Phenomenological, experimental, and analytical methods for studying processes and apparatuses. The analytical identification method

Physical processes can be studied by analytical or experimental methods.

Analytical methods make it possible to study processes on the basis of mathematical models, which can be presented as functions, equations, or systems of equations, mainly differential or integral ones. Usually a rough model is created first and then refined as the research proceeds. Such a model makes it possible to study the physical essence of the phenomenon quite fully.

However, analytical methods have significant disadvantages. To single out, from the entire class of solutions, the particular solution inherent only in the given process, uniqueness conditions must be specified. Incorrectly chosen boundary conditions often distort the physical essence of the phenomenon, and finding an analytical expression that reflects the phenomenon most realistically is either impossible or extremely difficult.

Experimental methods make it possible to study processes in depth, within the accuracy of the experimental technique, especially those parameters that are of the greatest interest. However, the results of a particular experiment cannot be extended to another process, even one very similar in nature. In addition, it is difficult to establish from experiment which parameters have a decisive influence on the course of the process, and how the process will proceed if several parameters change simultaneously. Experimental methods make it possible to establish only partial dependencies between individual variables in strictly defined intervals; using these dependencies outside those intervals can lead to serious errors.

Thus, both analytical and experimental methods have their advantages and disadvantages, and combining the positive aspects of the two is extremely fruitful. This principle underlies the methods of combined analytical and experimental research, which in turn are based on the methods of analogy, similarity and dimensional analysis.

Method of analogy. The analogy method is used when different physical phenomena are described by identical differential equations.

Let us look at the essence of the analogy method using an example. The heat flow is determined by the temperature difference (Fourier's law):

q_t = −λ·(dT/dx), (4.22)

where λ is the coefficient of thermal conductivity.

Mass transfer, or transfer of a substance (gas, steam, moisture, dust), is determined by the difference in the concentration C of the substance (Fick's law):

q_m = −D·(dC/dx), (4.23)

where D is the mass transfer coefficient.

The transfer of electricity through a conductor with linear resistance is determined by the voltage drop (Ohm's law):

q_e = −ρ·(dU/dx), (4.24)

where ρ is the electrical conductivity coefficient.

Three different physical phenomena have identical mathematical expressions, so they can be studied by analogy. Moreover, depending on what is taken as the original and what as the model, different kinds of modeling are possible. If the heat flow q_t is studied on a model with fluid motion, the modeling is called hydraulic; if it is studied on an electrical model, the modeling is called electrical.

The identity of the mathematical expressions does not mean that the processes are absolutely similar. In order to study the original process using a model, the criteria of analogy must be satisfied. It makes no sense to compare directly q_t and q_e, the thermal conductivity λ and the electrical conductivity ρ, or the temperature T and the voltage U. To eliminate this incomparability, both equations must be presented in dimensionless form. Each variable P is represented as the product of a dimensional constant P_n and a dimensionless variable P_b:

P = P_n · P_b. (4.25)

With (4.25) in mind, we write q_t, q_e, T, U and the coordinate x as products of dimensional constants and dimensionless variables. Substituting the transformed variables into equations (4.22) and (4.24), we obtain both equations in dimensionless form, so they can be compared; they will be identical if the dimensionless coefficients in front of the derivatives are equal.
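The displayed equations for this step are not preserved in the text; the following is a minimal reconstruction, assuming one-dimensional transfer and the substitutions q_t = q_tn·q_tb, q_e = q_en·q_eb, T = T_n·T_b, U = U_n·U_b, x = x_n·x_b:

q_tb = −[λ·T_n/(q_tn·x_n)]·(dT_b/dx_b);   q_eb = −[ρ·U_n/(q_en·x_n)]·(dU_b/dx_b).

The two dimensionless equations coincide when

λ·T_n/(q_tn·x_n) = ρ·U_n/(q_en·x_n).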

This equality is called the criterion of analogy. Using such criteria, the parameters of the model are determined from the equation of the original object.

Currently, electrical modeling is widely used. With its help, various physical processes can be studied (oscillations, filtration, mass transfer, heat transfer, stress distribution). This kind of modeling is universal, easy to use, and does not require bulky equipment. Electrical modeling uses analog computers (AVMs), by which, as already noted, we mean a combination of electrical elements in which the processes that occur are described by mathematical dependencies similar to those of the object under study (the original). A significant disadvantage of an AVM is its relatively low accuracy and lack of versatility: each task requires its own circuit, and therefore, in effect, another machine.

Other methods of electrical modeling are also used to solve problems: the continuous-medium method, electrical grids, the electromechanical analogy, the electrohydrodynamic analogy, etc. Planar problems are modeled using electrically conductive paper, and volumetric problems using electrolytic baths.

Dimensional method. In a number of cases, processes occur that cannot be directly described by differential equations. The dependence between the variable quantities in such cases can be established experimentally. In order to limit the scope of the experiment and find the connection between the main characteristics of the process, it is effective to use the method of dimensional analysis.

Dimensional analysis is a method of establishing the relationship between the physical parameters of the phenomenon being studied. It is based on the study of the dimensions of these quantities.

To measure a physical quantity Q means to compare it with another quantity q of the same nature, that is, to determine how many times Q is larger than q. In this case q is the unit of measurement.

Units of measurement make up a system of units, such as the International System of Units (SI). The system includes units of measurement that are independent of one another; they are called basic, or primary, units. In the SI system these are: mass (kilogram), length (meter), time (second), current (ampere), temperature (kelvin), luminous intensity (candela).

Units of measurement of other quantities are called derived, or secondary, units. They are expressed through the basic units. The formula that establishes the relationship between basic and derived units is called a dimension. For example, the dimension of speed V is

[V] = L·T^(−1),

where L is the symbol of length and T the symbol of time.

These symbols represent independent units of the system of units (T is measured in seconds, minutes, hours, etc., L in meters, centimeters, etc.). The dimension is derived from the defining equation, which in the case of speed has the form

V = l / t,

from which the dimension formula for speed follows. Dimensional analysis is based on the following rule: the dimension of a physical quantity is the product of the basic units of measurement raised to the appropriate powers.

In mechanics, as a rule, three basic units of measurement are used: mass, length and time. Thus, in accordance with the above rule, we can write:

[N] = L^l · M^m · T^t, (4.28)

where N is the designation of the derived unit of measurement;

L, M, T are the designations of the basic units (length, mass, time);

l, m, t are unknown exponents, which may be whole or fractional, positive or negative.

There are quantities whose dimension consists of the basic units raised to the zero power. These are the so-called dimensionless quantities. For example, the rock loosening coefficient is the ratio of two volumes, so its dimension is

L^3 / L^3 = L^0 · M^0 · T^0;

therefore, the loosening coefficient is a dimensionless quantity.

If it is established during the experiment that the quantity being determined may depend on several other quantities, a dimensional equation can be composed in which the symbol of the quantity being studied is on the left side and the product of the other quantities on the right. The symbols on the right side carry their own unknown exponents. To finally obtain the relationship between the physical quantities, these exponents must be determined.

For example, suppose we need to determine the time t spent by a body of mass m moving in a straight line along a path l under a constant force f. Time thus depends on length, mass and force. In this case the dimensional equation is written as follows:

[t] = [l]^x · [m]^y · [f]^z, that is T = L^x · M^y · (L·M·T^(−2))^z.

The left side of the equation can be represented as L^0·M^0·T^1. If the physical quantities of the phenomenon being studied are chosen correctly, the dimensions on the left and right sides of the equation must be equal. The system of equations for the exponents is then written:

x + z = 0;  y + z = 0;  −2z = 1,

from which x = y = 1/2 and z = −1/2.

This means that time depends on the path as √l, on the mass as √m, and on the force as 1/√f. However, a final solution to the problem cannot be obtained by dimensional analysis alone; only the general form of the dependence can be established:

t = k·√(m·l/f), (4.29)

where k is a dimensionless proportionality coefficient, which is determined by experiment.

In this way the form of the formula and the conditions of the experiment are found. It only remains to determine the relationship between the two quantities t and A, where A = √(m·l/f).
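As a numerical cross-check of the exponents found above, here is a minimal sketch in Python (an illustration assuming the dimensional basis L, M, T and the standard dimension of force L·M·T^(−2); the variable names are not from the text):

import numpy as np

# Columns correspond to the unknown exponents x (for l), y (for m), z (for f);
# rows are the exponents of L, M and T in the dimensional equation [t] = [l]^x [m]^y [f]^z.
A = np.array([[1.0, 0.0, 1.0],    # L:  x + z = 0
              [0.0, 1.0, 1.0],    # M:  y + z = 0
              [0.0, 0.0, -2.0]])  # T: -2z = 1
b = np.array([0.0, 0.0, 1.0])     # dimensions of t: L^0 M^0 T^1

x, y, z = np.linalg.solve(A, b)
print(x, y, z)   # 0.5 0.5 -0.5, i.e. t = k * sqrt(m * l / f)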

If the dimensions of the left and right sides of the equation are equal, this means that the formula in question is analytical and calculations can be performed in any system of units. On the contrary, if an empirical formula is used, it is necessary to know the dimensions of all terms of this formula.

Using dimensional analysis, we can answer the question: have we omitted any of the main parameters that influence the process? In other words, is the equation found complete or not?

Suppose that in the previous example the body heats up as it moves, and therefore the time also depends on the temperature.

Then the dimensional equation is written as:

T = L^x · M^y · (L·M·T^(−2))^z · Θ^w,

where Θ is the dimension of temperature. It is easy to see that w = 0, since the temperature dimension does not appear in any other quantity; that is, the process being studied does not depend on temperature, equation (4.29) is complete, and our assumption is wrong.

Thus, dimensional analysis allows:

– find dimensionless relationships (similarity criteria) to facilitate experimental studies;

– select the parameters influencing the phenomenon under study in order to find an analytical solution to the problem;

– check the correctness of analytical formulas.

The dimensional analysis method is very often used in research, including cases more complex than the example discussed. It makes it possible to obtain functional dependencies in criterion form. Let a function F be known in general form for some complex process:

F(n_1, n_2, …, n_k) = 0, (4.30)

where the quantities n_1, …, n_k have definite dimensions. The dimensional method consists in choosing from these k quantities three basic, mutually independent units of measurement. The remaining (k − 3) quantities entering the functional dependence (4.30) are transformed so that they appear in the function F in dimensionless form, i.e. as similarity criteria. The transformations are carried out using the selected basic units of measurement. Function (4.30) then takes the form

F(1, 1, 1, π_1, π_2, …, π_(k−3)) = 0.

The three ones appear because the first three quantities n_1, n_2 and n_3 are referred to values a, b, c that are correspondingly equal to them. Expression (4.30) is then analyzed according to the dimensions of the quantities; as a result, the numerical values of the exponents x_i, y_i, z_i are established and the similarity criteria are determined.

A clear example of the use of the dimensional analysis method in the development of analytical and experimental methods is the calculation method of Yu.Z. Zaslavsky, which makes it possible to determine the parameters of the support of a single mine working.


LECTURE 8

Similarity theory. Similarity theory is the doctrine of the similarity of physical phenomena. Its use is most effective when dependencies between variables cannot be found by solving differential equations. In this case, using data from a preliminary experiment, an equation is constructed by the similarity method whose solution can be extended beyond the experiment. This method of theoretical research into phenomena and processes is possible only in combination with experimental data.

Similarity theory establishes criteria for the similarity of various physical phenomena and, using these criteria, explores the properties of the phenomena. Similarity criteria are dimensionless combinations of the dimensional physical quantities that define the phenomena being studied.

The use of similarity theory gives important practical results. With the help of this theory, a preliminary theoretical analysis of the problem is carried out and a system of quantities characterizing phenomena and processes is selected. It is the basis for planning experiments and processing research results. Together with physical laws, differential equations and experiment, similarity theory allows one to obtain quantitative characteristics of the phenomenon being studied.

Formulating a problem and drawing up an experimental plan on the basis of similarity theory is greatly simplified thanks to the functional relationship between the set of quantities that determine the phenomenon or the behavior of the system. As a rule, in this case it is not a matter of studying the influence of each parameter on the phenomenon separately. It is very important that results can be obtained with just one experiment on one of the similar systems.

The properties of similar phenomena and criteria for the similarity of the phenomena being studied are characterized by three similarity theorems.

First similarity theorem. The first theorem, established by J. Bertrand in 1848, is based on the general concept of Newton's dynamic similarity and on his second law of mechanics. The theorem is formulated as follows: for similar phenomena, a certain set of parameters, called similarity criteria, can be found that are equal to each other.

Let us look at an example. Let two bodies with masses m_1 and m_2 move with accelerations a_1 and a_2 under the action of forces f_1 and f_2. The equations of motion are:

f_1 = m_1·a_1;  f_2 = m_2·a_2.

Extending the result to n similar systems, we obtain the similarity criterion:

f_1/(m_1·a_1) = f_2/(m_2·a_2) = … = f_n/(m_n·a_n) = idem. (4.31)

It has been agreed to denote a similarity criterion by the symbol Π; the result of the above example is then written:

Π = f/(m·a) = idem. (4.32)

Thus, in similar phenomena such ratios of parameters (similarity criteria) are equal to each other. The converse statement also holds: if the similarity criteria are equal, then the phenomena are similar.

The equation (4.32) obtained is called Newton's criterion of dynamic similarity; it is analogous to expression (4.29) obtained by the dimensional analysis method, and it is a special case of the thermodynamic similarity criterion based on the law of conservation of energy.
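A brief way to see this connection: since the acceleration of a body that traverses the path l in the time t is proportional to l/t², criterion (4.32) can be written as Π = f·t²/(m·l); substituting t = k·√(m·l/f) from (4.29) gives f·t²/(m·l) = k² = const, so both methods lead to the same dimensionless combination.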

When studying a complex phenomenon, several different processes may develop; the similarity of each of them ensures the similarity of the phenomenon as a whole. From a practical point of view, it is very important that similarity criteria can be transformed into criteria of another form by multiplication or division by a constant k or by one another. For example, if there are two criteria Π_1 and Π_2, the following expressions are also valid similarity criteria:

k·Π_1;  Π_1·Π_2;  Π_1/Π_2.

If similar phenomena are considered both in time and in space, we speak of the criterion of complete similarity. In this case the description of the process is the most complete: it gives not only the numerical value of a parameter (for example, the impact force of the blast wave at a point 100 m from the explosion site), but also the development and change of that parameter over time (for example, the increase in impact force, the rate of attenuation of the process, etc.).

If such phenomena are considered only in space or time, they are characterized by criteria of incomplete similarity.

Most often, approximate similarity is used, in which parameters that influence this process to a small extent are not considered. As a result, the research results will be approximate. The degree of this approximation is determined by comparison with practical results. In this case we are talking about criteria of approximate similarity.

Second similarity theorem (Π-theorem). It was formulated at the beginning of the 20th century by the scientists A. Federman and E. Buckingham as follows: every complete equation of a physical process can be presented in the form of (m − k) criteria (dimensionless dependencies), where m is the number of parameters and k is the number of independent units of measurement.

Such an equation can be solved with respect to any criterion and presented in the form of a criterion equation:

Π_1 = f(Π_2, Π_3, …, Π_(m−k)). (4.34)

Thanks to the Π-theorem, the number of variable dimensional quantities can be reduced to (m − k) dimensionless quantities, which simplifies data analysis, experimental planning and the processing of results.

Typically, in mechanics, three quantities are taken as the basic units: length, time and mass. Then, when studying a phenomenon that is characterized by five parameters (including a dimensionless constant), it is enough to obtain the relationship between the two criteria.

Let us consider an example of reducing quantities to dimensionless form that is commonly used in the mechanics of underground structures. The stress-strain state of the rocks around an excavation is determined by: the weight of the overlying strata γH, where γ is the volumetric weight of the rocks and H is the depth of the excavation below the surface; the strength characteristic of the rocks R; the support resistance q; the displacement of the excavation contour U; the size of the excavation r; and the deformation modulus E.

In general, the dependence can be written as follows:

f(γH, R, q, U, r, E) = 0.

In accordance with the Π-theorem, this system of parameters with one determined quantity should give dimensionless combinations. In our case time is not involved, so we obtain four dimensionless combinations, from which a simpler dependence is then formed.
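The specific combinations are not preserved in the text; one plausible set, given here only as an illustrative assumption (the dimensions of all six quantities reduce to a stress and a length, leaving 6 − 2 = 4 combinations), is U/r, γH/R, q/γH and E/γH, which leads to a dependence of the form

U/r = f_1(γH/R, q/γH, E/γH).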

Third similarity theorem. This theorem was formulated by Academician V.L. Kirpichev in 1930 as follows: a necessary and sufficient condition for similarity is the proportionality of the similar parameters that enter into the uniqueness conditions and the equality of the similarity criteria of the phenomenon being studied.

Two physical phenomena are similar if they are described by the same system of differential equations and have similar (boundary) conditions of uniqueness, and their defining criteria of similarity are numerically equal.

The uniqueness conditions are the conditions by which a specific phenomenon is distinguished from the entire set of phenomena of the same type. The similarity of uniqueness conditions is established in accordance with the following criteria:

– similarity of geometric parameters of systems;

– proportionality of physical constants that are of primary importance for the process being studied;

– similarity of initial conditions of systems;

– similarity of the boundary conditions of the systems throughout the entire period under consideration;

– equality of criteria that are of primary importance for the process being studied.

The similarity of two systems is ensured if their similar parameters are proportional and the similarity criteria, determined using the Π-theorem from the complete equation of the process, are equal.

There are two types of problems in similarity theory: direct and inverse. The direct problem is to determine the similarity criteria when the equations are known. The inverse problem consists in establishing an equation that describes similar phenomena. In both cases the solution comes down to determining the similarity criteria and the dimensionless proportionality coefficients.

The problem of finding the process equation using the Π-theorem is solved in the following order:

– determine, by one method or another, all the parameters that influence the process. One of the parameters is written as a function of the others:

n_1 = f(n_2, n_3, …, n_m); (4.35)

– assume that equation (4.35) is complete and homogeneous with respect to dimension;

– choose a system of units of measurement. In this system, independent parameters are selected. The number of independent parameters is equal to k;

– compose a matrix of the dimensions of the selected parameters and calculate its determinant; if the parameters are independent, the determinant is not equal to zero (see the sketch after this list);

– find the combinations of criteria using the dimensional analysis method; in accordance with the Π-theorem their number in the general case is (m − k);

– determine proportionality coefficients between criteria using experiment.
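A minimal sketch of the matrix-of-dimensions check mentioned in the list (the parameter set — a length, a velocity and a density — is illustrative, not taken from the text):

import numpy as np

# Rows: candidate base parameters; columns: exponents of L, M, T in their dimensions.
dims = np.array([[1.0, 0.0, 0.0],    # length l    -> L
                 [1.0, 0.0, -1.0],   # velocity v  -> L T^-1
                 [-3.0, 1.0, 0.0]])  # density rho -> L^-3 M

det = np.linalg.det(dims)
print(det)  # non-zero (= 1), so l, v and rho are dimensionally independent
            # and may be chosen as the three basic quantities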

Mechanical similarity criteria. In mining science, mechanical similarity criteria are the most widely used. It is assumed that other physical phenomena (thermal, electrical, magnetic, etc.) do not affect the process being studied. To obtain the necessary criteria and similarity constants, Newton's law of dynamic similarity and the method of dimensional analysis are used.

The basic units are length, mass and time. All other characteristics of the process under consideration will depend on these three basic units. Therefore, mechanical similarity sets criteria for length (geometric similarity), time (kinematic similarity) and mass (dynamic similarity).

Geometric similarity of two systems occurs if all dimensions of the model are changed C_l times relative to the system with real dimensions. In other words, the ratio of the distances between any pair of similar points in nature and on the model is a constant quantity called the geometric scale:

C_l = l_n / l_m. (4.36)

The ratio of the areas of similar figures is equal to the square of this proportionality coefficient, C_l², and the ratio of volumes to its cube, C_l³.

Kinematic similarity occurs if similar particles of the systems, moving along geometrically similar trajectories, cover geometrically similar distances in time intervals t_n in nature and t_m on the model that differ by a constant proportionality coefficient:

C_t = t_n / t_m. (4.37)

Dynamic similarity occurs if, in addition to conditions (4.36) and (4.37), the masses of similar particles of the similar systems also differ from each other by a constant proportionality coefficient:

C_m = m_n / m_m. (4.38)

The coefficients C_l, C_t and C_m are called similarity coefficients.

2.1. PROPERTIES OF REGULATED OBJECTS

Modern automatic control systems (ACS) usually use commercially produced regulators. The block diagram of such a system is shown in Fig. 1.

Here O is the control object;

PR – industrial regulator;

X(t) – reference (setpoint) action;

Y(t) – process at the output of the object;

f(t) – disturbing action;

E(t) = X(t) − Y(t) – deviation of the controlled process from the specified one (control error);

μ(t) – regulating action on the object.

Industrial regulators are universal devices designed to regulate a wide variety of quantities and objects. Their design is such that various measuring transducers and actuators can be connected to them. They consist of separate blocks that perform specific operations (amplification, addition, integration, etc.). From these blocks, circuits can be assembled that implement almost any control law. Modern industrial controllers are based on microcontrollers.
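As an illustration of how a control law can be assembled from elementary operations (amplification, addition, integration, differentiation), here is a minimal sketch of a discrete PID law in Python; the class, parameter names and sampling scheme are illustrative assumptions, not a description of any particular industrial regulator:

class PIDController:
    """Discrete PID law: mu = Kp*E + Ki*integral(E dt) + Kd*dE/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        # E(t) = X(t) - Y(t): deviation of the controlled process from the set value
        error = setpoint - measurement
        self.integral += error * self.dt                  # integration block
        derivative = (error - self.prev_error) / self.dt  # differentiation block
        self.prev_error = error
        # summation and amplification blocks form the regulating action mu(t)
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDController(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
print(pid.update(setpoint=1.0, measurement=0.8))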

The dynamic properties of the ACS depend on the characteristics of the object and the controller. All ACS parameters can be divided into three groups:

Specified parameters that cannot be changed (for example, static and dynamic parameters of an object);

Parameters that can be selected by the designer during development of the regulator, but cannot be changed during tuning;

Parameters that can be changed during tuning (the tuning parameters).

When developing an ACS based on an industrial regulator, the task arises of determining and setting the tuning parameters of the regulator according to the given parameters of the object. This problem is solved in the following order:

Based on information about the controlled object, the nature of the disturbances, the control actions, etc., a fairly simple standard control law is selected;

The optimal regulator settings are calculated;

The quality of the system is re-analyzed;

If the system does not meet the requirements, a more complex control law is chosen;

If this measure does not give satisfactory results, the structure of the control system is complicated (additional control loops are introduced, the nature of the impact of disturbances is clarified, etc.).

The dynamic properties of the control object influence the type of transient process.

The properties of the object must be known when developing an automation scheme, choosing the law of operation of the regulator and determining the optimal values ​​of its setting parameters. Correct consideration of the properties of an object allows you to create an automatic control system with high quality indicators of the transient process.


The main properties of control objects are: self-leveling, capacity and delay.

Self-leveling is the property of an object to come to an equilibrium state on its own after a change in the input action. In objects with self-leveling, a step change in the input value leads to a change in the output value at a rate that gradually decreases to zero, which is associated with the presence of internal negative feedback. The greater the degree of self-leveling, the smaller the deviation of the output value from the original value. Self-leveling thus characterizes the stability of the object.

SELF-LEVELING OBJECT

The object is a tank E (Fig. 1, a); the inlet flow is F_in, the outlet flow is F_out. Let us consider how the level L changes when F_in and F_out change, i.e. L = f(F_in, F_out). When the flow F_in increases (Fig. 1, b), the level begins to rise at the moment t_1; at the same time the hydrostatic pressure of the liquid column increases, which causes an increase in the flow F_out, which tends toward F_in. The level rises, but at the moment the flows become equal it settles at a new steady value.

Figure 1 Scheme of an object with self-leveling (a) and graph (b)

OBJECT WITHOUT SELF-LEVELING

At the outlet of the tank E a pump H with a constant delivery F_out is installed (Fig. 2, a). When the flow F_in increases at the moment t_1, the flow F_out does not change, which causes the level to rise continuously (Fig. 2, b). Such an object can be represented by an integrating link.
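A minimal simulation sketch contrasting the two cases (the numbers, the square-root outflow law and the function name are illustrative assumptions, not taken from the text):

import numpy as np

def final_level(f_in, self_leveling, steps=2000, dt=0.1,
                area=1.0, k_out=0.5, f_out_pump=1.0, level0=4.0):
    """Integrate dL/dt = (F_in - F_out) / A after a step increase of F_in.
    With self-leveling, F_out = k_out * sqrt(L) grows with the head;
    without it, F_out is fixed by the pump and the level integrates the imbalance."""
    level = level0
    for _ in range(steps):
        f_out = k_out * np.sqrt(max(level, 0.0)) if self_leveling else f_out_pump
        level += (f_in - f_out) / area * dt
    return level

print(final_level(1.2, self_leveling=True))   # settles near (1.2 / 0.5)**2 = 5.76
print(final_level(1.2, self_leveling=False))  # keeps rising: no new equilibrium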

Capacity C characterizes the inertia of the object, i.e. the degree of influence of the input quantity x on the rate of change of the output dy/dt:

C = x / (dy/dt). (1)

The greater the capacity, the lower the rate of change of the object's output value, and vice versa. Capacity is a property inherent in all technological objects.

Figure 2 Scheme of an object without self-leveling (a) and graph (b)

The lag (delay) of an object is expressed in the fact that its output value y begins to change not immediately after a disturbance is applied, but only after a certain time interval τ, called the lag time. All real industrial objects (including oilfield facilities) have a delay, since time is required for the signal to travel from the place where the disturbance is applied to the place where the change in the output value is recorded. Denoting this distance by l (Fig. 3, a) and the speed of signal propagation by V, the delay time can be expressed as

τ = l / V. (2)

As an example of an object with a delay, consider a pipeline of length l whose input receives a product at a flow rate F_in, while at the pipeline outlet we have F_out (see Fig. 3, a). Fig. 3, b presents a graph of the change in F_in at the moment t_1. The change in F_out occurs with some delay τ, at the moment t_2. The delay is determined by the time difference

τ = t_2 − t_1. (3)

The properties of objects have a significant impact on the quality of the transient process of the ACS and on the choice of the control law.
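A sketch of the pure transport delay τ = l / V as a shifted signal (illustrative; the buffer-based implementation and the names are assumptions, not from the text):

from collections import deque

def transport_delay(signal, tau, dt):
    """Pure dead time: the output repeats the input shifted by tau seconds."""
    n = int(round(tau / dt))
    if n == 0:
        return list(signal)
    buf = deque([signal[0]] * n, maxlen=n)  # pipeline contents before the step
    out = []
    for u in signal:
        out.append(buf[0])  # what leaves the pipe now entered it n samples ago
        buf.append(u)
    return out

f_in = [0.0] * 10 + [1.0] * 20                  # F_in steps up at t1 = 1.0 s (dt = 0.1 s)
f_out = transport_delay(f_in, tau=0.5, dt=0.1)  # F_out steps up at t2 = t1 + tau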

The effect of self-leveling in an object is similar to the action of an automatic regulator.

Thus, objects that do not have self-leveling cannot maintain stable operation on their own and require the mandatory use of an automatic regulator. Moreover, not every regulator can cope with the task of controlling such objects. The absence of self-leveling therefore complicates the task of regulation, while its presence makes it easier to maintain the controlled parameter at the specified value. The higher the degree of self-leveling, the simpler the means that can be used to ensure the required quality of regulation.

The capacity of an object influences the choice of controller type. The smaller it is, i.e. the greater the rate of change of the object's output value for a given load change, the stronger the regulator's action on the object should be.

The presence of delay in an ACS complicates the task of regulating a technological parameter in a facility. Therefore, one should strive to reduce it: install the measuring transducer and the actuator as close as possible to the controlled object, use low-inertia measuring and normalizing transducers, etc.

Figure 3 Scheme of an object with a delay (a) and graph (b)

The properties of objects are determined by analytical, experimental and experimental-analytical methods.

The analytical method consists in drawing up a mathematical description of the object, in which the equations of statics and dynamics are derived from a theoretical analysis of the physical and chemical processes occurring in the object under study, taking into account the design of the equipment and the characteristics of the processed substances.

The analytical method is used in the design of control systems for technological objects whose physical and chemical processes have been sufficiently well studied. It makes it possible to predict the operation of objects in static and dynamic modes, but it involves the difficulty of solving and analyzing the compiled equations and requires special studies to determine the values of their coefficients.

The experimental method consists in determining the characteristics of a real object by performing a special experiment on it. The method is quite simple, requires little labor, and allows the properties of a particular object to be determined fairly accurately. However, with the experimental method it is impossible to identify functional connections between the properties of the processed and resulting substances, the performance indicators of the technological process, and the structural characteristics of the object. This drawback prevents the results obtained by the experimental method from being extended to other, similar objects.

The experimental-analytical method consists in composing equations by analyzing the phenomena occurring in the object, while the numerical values of the coefficients of the resulting equations are determined experimentally on a real object. Being a combination of the analytical and experimental methods for determining the properties of objects, this method combines their advantages and disadvantages.
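A minimal sketch of the experimental-analytical idea: the model structure (here a first-order step response, assumed only for illustration) comes from analysis, while its coefficients are fitted to measured data; the data below are synthetic and the names are not from the text:

import numpy as np
from scipy.optimize import curve_fit

def first_order_step(t, gain, time_const):
    # Step response of a first-order object: y(t) = K * (1 - exp(-t / T))
    return gain * (1.0 - np.exp(-t / time_const))

t = np.linspace(0.0, 50.0, 101)
y_measured = first_order_step(t, 2.0, 8.0) + np.random.normal(0.0, 0.02, t.size)  # mock experiment

(gain, time_const), _ = curve_fit(first_order_step, t, y_measured, p0=(1.0, 1.0))
print(gain, time_const)  # recovered object coefficients, close to 2.0 and 8.0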

Experimental method

Structural-analytical method

It is known that natural science owes its development to the use of experiment. An experiment differs from simple observation in that the researcher, studying a phenomenon, can arbitrarily change the conditions under which it occurs and, by observing the results of such intervention, draw conclusions about the laws of the phenomenon being studied. For example, an experimenter can study the speed of a subject's reaction to signals of different intensity, or study the actions of a subject who has to find a way out of mazes of different levels of difficulty. In this case the experimenter observes and records what techniques, means and forms of behavior the subject uses when finding a way out of the proposed mazes. Further analysis of the results, in which the experimenter traces the structure of the techniques used by the subject, is called the method of structural analysis.

In the examples given, we were talking about direct experiments in which the researcher, actively changing the conditions of the subjects' activity, observed their behavior. Typically, such studies are carried out in so-called laboratory conditions, which is why this kind of experiment is called a laboratory experiment. Special equipment is often used, the experiment is carefully planned, and the subject takes part voluntarily and knows that he is being studied.

All of psychophysics and psychophysiology, as well as many studies in general psychology (memory, attention, thinking), are carried out under laboratory conditions. These experiments are not controversial when their purpose is to study externally observable reactions or behavior. But is it possible to study mental phenomena themselves experimentally: perceptions, experiences, imagination, thinking? After all, they are inaccessible to direct observation, and to conduct an experiment it is necessary to change the conditions under which these processes occur. Directly this is not possible, but it becomes possible indirectly if we obtain the subject's consent to such an experiment and, relying on his introspection (the subjective method), change with his help the conditions under which the mental processes arise in his consciousness.

Experimental genetic method

Along with the structural-analytical method, the experimental-genetic method is widely used in psychology; it is especially important for child (genetic) psychology. With its help the experimenter can investigate the origin and development of particular mental processes in a child, study which stages are included in this development and what factors determine it. Answers to these questions can be obtained by tracing and comparing how the same tasks are performed at successive stages of child development. This approach is called the genetic (or cross-sectional) method in psychology. Another modification of the experimental-genetic method is the longitudinal study, i.e. the long-term and systematic study of the same subjects, which makes it possible to determine age-related and individual variability in the phases of a person's life cycle.

Longitudinal research is often conducted under the conditions of a natural experiment, which was proposed in 1910 by A.F. Lazursky. Its point is to eliminate the stress a person experiences when he knows he is being experimented on, and to transfer the research to ordinary, natural conditions (a lesson, a conversation, a game, homework, etc.).

An example of a natural experiment is a study of memorization productivity depending on the set for how long the material must be retained in memory. During a lesson in two classes, students are introduced to the material that needs to be studied. The first class is told that they will be tested the next day, and the second class that the test will be in a week. In fact, both classes were tested two weeks later. This natural experiment revealed the advantage of a set for retaining material in memory for a long time.

In developmental and educational psychology, a combination of the structural-analytical and experimental-genetic methods is often used. For example, in order to find out how a particular mental activity is formed, the subject is placed in various experimental conditions and asked to solve certain problems. In some cases he is required to find a solution independently; in others, various kinds of hints are provided. The experimenter, observing the subjects' activity, determines the conditions under which the subject can best master this activity. At the same time, using the techniques of the experimental-genetic method, it proves possible to form complex mental processes experimentally and to explore their structure more deeply. In educational psychology this approach is called a formative experiment.

Experimental-genetic methods were widely used in the works of J. Piaget, L.S. Vygotsky, P.P. Blonsky, S.L. Rubinstein, A.V. Zaporozhets, P.Ya. Galperin and A.N. Leontiev. A classic example of the use of the genetic method is L.S. Vygotsky's study of the child's egocentric speech, that is, speech addressed to oneself that regulates and controls the child's practical activity. Vygotsky showed that, genetically, egocentric speech goes back to external (communicative) speech: the child addresses himself aloud in the same way that his parents or the adults raising him addressed him. However, year by year the child's egocentric speech becomes more and more abbreviated, and therefore incomprehensible to others, and by the beginning of school age it ceases altogether. The Swiss psychologist J. Piaget believed that by this age egocentric speech simply dies out, but Vygotsky showed that it does not disappear: it moves onto the internal plane and becomes inner speech, which plays an important role in the self-regulation of behavior. Inner pronunciation, "speech for oneself," retains the structure of external speech but is devoid of phonation, i.e. of pronouncing sounds. It forms the basis of our thinking when we say to ourselves the conditions or the process of solving a problem.

The key to the success of an experiment lies in the quality of its planning. Effective experimental designs include the simulated pretest-posttest design, the posttest-only control group design, the pretest-posttest control group design, and the Solomon four-group design. Unlike quasi-experimental designs, these designs provide greater confidence in the results by eliminating a number of threats to internal validity (premeasurement, interaction, background, natural development, instrumentation, selection, and attrition).

The experiment consists of four main stages, regardless of the subject of study and who is carrying it out. So, when conducting an experiment, you should: determine what exactly needs to be learned; take appropriate action (conduct an experiment manipulating one or more variables); observe the effect and consequences of these actions on other variables; determine the extent to which the observed effect can be attributed to the actions taken.

To be sure that the observed results are due to the experimental manipulation, the experiment must be valid. Factors that may affect the results must be excluded. Otherwise it will not be known what to attribute the differences in respondents' attitudes or behavior observed before and after the manipulation to: the manipulation itself, changes in measurement instruments, recording techniques, data collection methods, or inconsistent interviewing.

In addition to the experimental design and internal validity, the researcher needs to determine the optimal conditions for conducting the planned experiment. These are classified according to how realistic the experimental setting and environment are; on this basis, laboratory and field experiments are distinguished.

Laboratory experiments: advantages and disadvantages

Laboratory experiments are typically conducted to evaluate price levels, alternative product formulations, creative advertising designs, and packaging designs. Experiments make it possible to test different products and advertising approaches. During laboratory experiments, psychophysiological reactions are recorded, and the direction of gaze or the galvanic skin response is observed.

When conducting laboratory experiments, researchers have ample opportunity to control their progress. They can plan the physical conditions of the experiments and manipulate strictly defined variables. But the artificiality of the laboratory setting usually creates an environment that differs from real-life conditions. Accordingly, in laboratory conditions the reaction of respondents may differ from their reaction in natural conditions.

As a consequence, well-designed laboratory experiments usually have a high degree of internal validity, a relatively low degree of external validity, and a relatively low level of generalizability.

Field experiments: advantages and disadvantages

Unlike laboratory experiments, field experiments are characterized by a high level of realism and a high level of generalizability. However, when they are carried out, threats to internal validity may arise. It should also be noted that conducting field experiments (very often in places of actual sales) takes a lot of time and is expensive.

Today, the controlled field experiment is the best tool in marketing research: it makes it possible both to identify cause-and-effect relationships and to project the results of an experiment accurately onto a real target market.

Examples of field experiments include test markets and electronic test markets.

Test-market experiments are used when evaluating the introduction of a new product, as well as alternative strategies and advertising campaigns, before launching a national campaign. In this way alternative courses of action can be assessed without large financial investments.

A test market experiment typically involves purposive selection of geographic areas to obtain representative, comparable geographic units (cities, towns). Once potential markets are selected, they are assigned to experimental conditions. It is recommended that "for each experimental condition there should be at least two markets. In addition, if it is desired to generalize the results to the entire country, each of the experimental and control groups should include four markets, one from each geographic region of the country."

A typical test market experiment can take anywhere from a month to a year or more to complete. Researchers have at their disposal point-of-sale test markets and simulated test markets. A point-of-sale test market usually has a rather high level of external validity and an average level of internal validity. A simulated test market has the strengths and weaknesses of laboratory experiments: a relatively high level of internal validity and a relatively low level of external validity. Compared with point-of-sale test markets, simulated test markets provide greater ability to control extraneous variables, results come faster, and the cost of obtaining them is lower.

An electronic test market is "a market in which a market research company can monitor the advertising broadcast in each member's home and track the purchases made by members of each household." Research conducted in an electronic test market correlates the type and quantity of advertising seen with purchasing behavior. The goal of electronic test market research is to increase control over the experimental situation without sacrificing generalizability or external validity.

During an electronic test market experiment conducted within a limited number of markets, the television signal sent to participants' homes is monitored and the purchasing behavior of the individuals living in those homes is recorded. Electronic test market technologies allow the commercials shown to each individual family to be varied, comparing the response of the test group with that of a control group. Typically, research in an electronic test market lasts from six to twelve months.

More detailed information on this topic can be found in the book by A. Nazaikin