An efficient method for computing the syndrome of a Reed–Solomon code is proposed. The method is based on the use of partial normalized cyclic convolutions in the partial inverse cyclotomic discrete Fourier transform. In terms of multiplicative complexity, the method outperforms the known algorithms.
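For context, the baseline definition-based syndrome computation that fast algorithms of this kind improve upon can be sketched as follows. The field GF(2^8), the primitive polynomial 0x11d, and the Horner evaluation are illustrative choices, not the paper's convolution-based method:

```python
# Definition-based Reed-Solomon syndrome computation over GF(2^8).
# This is the textbook O(n*t)-multiplication baseline, NOT the paper's
# cyclotomic-FFT method with partial normalized cyclic convolutions.

# Log/antilog tables for GF(2^8) with primitive polynomial x^8+x^4+x^3+x^2+1.
EXP = [0] * 512
LOG = [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11d
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]

def gf_mul(a, b):
    """Multiplication in GF(2^8) via the log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, num_syndromes):
    """S_j = r(alpha^j) for j = 1..2t, evaluated by Horner's rule.
    `received` lists polynomial coefficients, highest degree first."""
    out = []
    for j in range(1, num_syndromes + 1):
        alpha_j = EXP[j % 255]
        s = 0
        for coeff in received:
            s = gf_mul(s, alpha_j) ^ coeff
        out.append(s)
    return out
```

A valid codeword (here, the zero word) yields all-zero syndromes, while any single corrupted symbol makes every syndrome nonzero.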
In each node of a network, the economy is described by a simple two-period Romer-type model of endogenous growth with production and knowledge externalities. The sum of knowledge levels in the neighboring nodes creates an externality in the production of each node of the network. The game equilibrium in the network is investigated. The agents' optimal decisions are obtained as functions of the externality size, and the uniqueness of the inner equilibrium is proved. The role of passive agents in network formation is studied; in particular, we examine the possibility of adding a passive agent to a regular network and of joining regular networks through nodes with passive agents. It is shown that the sum of knowledge levels over all nodes decreases when a new link is added.
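As an illustration of how equilibria of such network games can be computed, here is a best-response iteration for a toy network investment game. The quadratic payoff, the parameters alpha and beta, and the unit endowment are hypothetical stand-ins, not the Romer-type production model of the paper:

```python
# Toy network investment game: agent i chooses k_i in [0, e] and receives
#   U_i = (e - k_i) + k_i*(alpha + beta * sum of neighbors' k) - k_i**2 / 2,
# an illustrative concave payoff giving the closed-form best response
#   BR_i = clip(alpha - 1 + beta * sum_j k_j, 0, e).

def equilibrium(adjacency, alpha, beta, e=1.0, iters=200):
    """Jacobi best-response iteration; converges when beta is small enough
    for the best-response map to be a contraction."""
    n = len(adjacency)
    k = [0.0] * n
    for _ in range(iters):
        k = [min(e, max(0.0, alpha - 1 + beta * sum(k[j] for j in adjacency[i])))
             for i in range(n)]
    return k

# Complete graph K3 (each node has the two others as neighbors).
triangle = [[1, 2], [0, 2], [0, 1]]
k = equilibrium(triangle, alpha=1.5, beta=0.1)
```

On K3 the symmetric inner equilibrium solves k = (alpha - 1) + 2*beta*k, i.e. k = (alpha - 1)/(1 - 2*beta) = 0.625 for these parameters, and the iteration recovers it.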
Topic modeling is a popular approach for clustering text documents. However, current tools suffer from a number of unsolved problems, such as instability and the lack of criteria for selecting the values of model parameters. In this work, we propose a method that partially solves the problem of optimizing model parameters while simultaneously accounting for semantic stability. Our method is inspired by concepts from statistical physics and is based on Sharma–Mittal entropy. We test our approach on two models, probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA) with Gibbs sampling, and on two datasets in different languages. We compare our approach against a number of standard metrics, each of which is able to account for only one of the parameters of interest. We demonstrate that Sharma–Mittal entropy is a convenient tool for selecting both the number of topics and the values of hyper-parameters while simultaneously controlling for semantic stability, which none of the existing metrics can do. Furthermore, we show that concepts from statistical physics can contribute to theory construction for machine learning, a rapidly developing field that currently lacks a consistent theoretical ground.
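For reference, a minimal sketch of the two-parameter Sharma–Mittal entropy and its Rényi and Tsallis limiting cases; the formulas below are the standard ones, while how they are applied to topic-model distributions is specific to the paper:

```python
import math

def sharma_mittal(p, q, r):
    """Sharma-Mittal entropy of distribution p, entropic indices q != 1, r != 1:
    S_{q,r}(p) = ((sum_i p_i^q)^((1-r)/(1-q)) - 1) / (1 - r)."""
    s = sum(pi ** q for pi in p if pi > 0)
    return (s ** ((1 - r) / (1 - q)) - 1) / (1 - r)

def renyi(p, q):
    """Renyi entropy: the r -> 1 limit of Sharma-Mittal."""
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1 - q)

def tsallis(p, q):
    """Tsallis entropy: the r -> q case of Sharma-Mittal."""
    return (1 - sum(pi ** q for pi in p if pi > 0)) / (q - 1)
```

Setting r = q in the Sharma–Mittal formula reproduces Tsallis entropy exactly, and letting r approach 1 reproduces Rényi entropy, which is why the two-parameter family can interpolate between the criteria discussed in the abstract.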
In this paper, we apply multifractal formalism to the analysis of the statistical behaviour of topic models under variation of the number of topics. Fractal analysis of topic models shows that self-similar fractal clusters exist in large textual collections. We provide numerical results for three topic models (pLSA, ARTM, and LDA with Gibbs sampling) on two datasets, one in English and one in Russian. We demonstrate that cluster formation occurs precisely in the transition regions. Linear regions do not lead to changes in the fractal structure; therefore, it is sufficient to find the transition regions when studying textual collections. Accordingly, the problem of analysing the evolution of topic models can be reduced to the problem of searching for transition regions in topic models.
In this paper, we consider the following problem: what affects the Nash equilibrium amount of investment in knowledge when one complete graph joins another? Solving this problem allows us to understand exactly how agents will behave when deciding whether to enter the other network, which conditions and externalities affect this decision, and how the future equilibrium amount of investment in knowledge can be predicted.
To find an optimal and time-consistent cooperative path in a multicriteria multistage game, the minimal-sum-of-relative-deviations rule is introduced. Using this rule, one can construct a vector-valued characteristic function that is weakly superadditive. The sustainability of the cooperative agreement is ensured by an approach based on an imputation distribution procedure (IDP). We formulate the conditions an IDP should satisfy to guarantee that the core is strongly time-consistent (STC). Namely, if the imputation distribution procedure for the Shapley value satisfies the efficiency condition, the strict balance condition, and the strong irrational-behavior-proof condition, and the Shapley value belongs to the core of each subgame along the cooperative path, then it can be used as a "supporting imputation" which guarantees that the whole core is STC. We discuss three payment schedules and check whether they can be used as supporting imputation distribution procedures for the considered multicriteria game.
Let f(a,b,c,d) = √(a²+b²) + √(c²+d²) − √((a+c)²+(b+d)²), and let (a,b,c,d) stand for a,b,c,d ∈ ℤ≥0 such that ad − bc = 1. Define F(s) = ∑_(a,b,c,d) f(a,b,c,d)^s.
In other words, we consider the sum of the powers of the triangle inequality defects for the lattice parallelograms (in the first quadrant) of area one. We prove that F(s) converges when s > 1 and diverges at s = 1/2. We also prove that
∑_(a,b,c,d) 1/((a+c)²(b+d)²(a+b+c+d)²) = 1/3.
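Assuming the identity above, it can be checked numerically by enumerating the quadruples through the SL(2,ℕ) tree generated by L = [[1,1],[0,1]] and R = [[1,0],[1,1]] (every nonnegative integer matrix of determinant one is a unique word in L and R); the pruning threshold is an implementation choice:

```python
def parallelogram_sum(threshold=1e-13):
    """Sum 1/((a+c)^2 (b+d)^2 (a+b+c+d)^2) over nonnegative integer
    quadruples with ad - bc = 1, enumerated via the SL(2,N) tree rooted
    at the identity matrix. Terms shrink strictly down the tree, so any
    branch whose term falls below `threshold` can be pruned."""
    total = 0.0
    stack = [(1, 0, 0, 1)]               # the identity matrix as (a, b, c, d)
    while stack:
        a, b, c, d = stack.pop()
        t = 1.0 / ((a + c) ** 2 * (b + d) ** 2 * (a + b + c + d) ** 2)
        if t < threshold:
            continue
        total += t
        stack.append((a, b, a + c, b + d))   # right-multiply by L
        stack.append((a + c, b + d, c, d))   # right-multiply by R
    return total
```

With the default threshold the partial sum agrees with 1/3 to several decimal places; the omitted tail is positive, so the computed value approaches 1/3 from below.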
Game theory has recently become a useful tool for modeling and studying various networks. The past decade has witnessed a huge explosion of interest in issues at the intersection of networks and game theory. With the rapid growth of data traffic from all kinds of devices and networks, game-theoretic tools must become more intelligent. Game theory is called to play a key role in the design of new-generation networks that are distributed, self-organizing, cooperative, and intelligent. This book consists of invited and technical papers of GAMENETS 2018 and contributed chapters on game-theoretic applications such as networks, social networks, and smart grids.
This paper considers the following network game. In each node of a network, the economy is described by a simple two-period Romer-type model of endogenous growth with production and knowledge externalities. The sum of knowledge levels in the neighboring nodes creates an externality in the production of each network node. The concept of node type is introduced, and a corresponding typology of networks is suggested. As demonstrated below, all inner equilibria of the game are determined by this typology. For several typologies, the equilibrium knowledge levels are found in explicit form for nodes that occupy different positions in the network.
Cloud applications bring new challenges to the design of network elements, in particular the burstiness of traffic workloads. A shared-memory switch is a good candidate architecture for exploiting buffer capacity, and in this work we analyze its performance. Our goal is to explore the impact of additional traffic characteristics, such as varying processing requirements and packet values, on objective functions. The outcome of this work is a better understanding of the parameters relevant for buffer management to achieve better performance in the dynamic environments of data centers. We consider a model that captures more properties of the target architecture than previous work and study several scheduling and buffer management algorithms specifically designed to optimize its performance. In particular, we provide analytic guarantees for the throughput performance of our algorithms that are independent of specific distributions of packet arrivals. We furthermore report on a comprehensive simulation study which validates our analytic results.
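As a toy illustration of why the buffer management policy matters when packets carry values, the following sketch compares tail-drop with a push-out policy in a shared-memory buffer. Both the model and the policies are drastic simplifications invented for this example, not the algorithms analyzed in the paper:

```python
def simulate(policy, arrivals, buffer_size, num_ports):
    """Toy shared-memory switch: one shared buffer of `buffer_size` slots,
    `num_ports` output ports, one packet transmitted per port per time slot.
    `arrivals` is a list of time slots, each a list of (port, value) pairs.
    `policy` is 'taildrop' (reject when full) or 'pushout' (evict the
    lowest-value stored packet when a more valuable one arrives).
    Returns the total value of transmitted packets."""
    buffers = [[] for _ in range(num_ports)]   # per-port FIFO queues
    occupancy = 0
    gained = 0.0
    for slot in arrivals:
        for port, value in slot:
            if occupancy < buffer_size:
                buffers[port].append(value)
                occupancy += 1
            elif policy == 'pushout':
                # locate the single lowest-value packet in the shared memory
                lo_port = min(range(num_ports),
                              key=lambda p: min(buffers[p], default=float('inf')))
                lo_val = min(buffers[lo_port])
                if value > lo_val:
                    buffers[lo_port].remove(lo_val)
                    buffers[port].append(value)
        for q in buffers:                       # one departure per port
            if q:
                gained += q.pop(0)
                occupancy -= 1
    return gained
```

In a burst of three packets with values (1, 1, 5) arriving at one port of a two-slot buffer, tail-drop loses the valuable packet while push-out keeps it, which is the kind of gap the paper's analytic guarantees quantify.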
This study proposes minimizing Rényi and Tsallis entropies to find the optimal number of topics T in topic modeling (TM). A promising tool for obtaining knowledge about large text collections, TM is a method whose properties are under-researched; in particular, parameter optimization in such models has been hindered by the use of monotonic quality functions with no clear thresholds. In this research, topic models obtained from large text collections are viewed as nonequilibrium complex systems where the number of topics is regarded as an equivalent of temperature. This allows calculating the free energy of such systems, a value through which both Rényi and Tsallis entropies are easily expressed. Numerical experiments with four TM algorithms and two text collections show that both entropies as functions of the number of topics yield clear minima in the middle area of the range of T. On the marked-up dataset the minima of three algorithms correspond to the value of T detected by humans. It is concluded that Tsallis and especially Rényi entropy can be used for T optimization instead of Shannon entropy, which decreases even when T becomes obviously excessive. Additionally, some algorithms are found to be better suited for revealing local entropy minima. Finally, we test whether the overall content of all topics taken together is resistant to the change of T and find that this dependence has a quasi-periodic structure which demands further research.
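A reduced-form sketch of the selection rule, treating q = 1/T as the deformation parameter and minimizing Rényi entropy over candidate models; the paper's actual estimator goes through the free energy of the topic model, so the code below only illustrates the general idea on hypothetical distributions:

```python
import math

def renyi_entropy(p, q):
    """Renyi entropy of a distribution p at deformation parameter q != 1."""
    s = sum(pi ** q for pi in p if pi > 0)
    return math.log(s) / (1 - q)

def select_num_topics(models):
    """models: dict mapping a number of topics T to a flattened topic-word
    distribution. Treat q = 1/T and pick the T minimizing Renyi entropy."""
    scores = {T: renyi_entropy(p, 1.0 / T) for T, p in models.items() if T > 1}
    return min(scores, key=scores.get), scores
```

For the uniform distribution over n outcomes the Rényi entropy equals ln(n) for every q, so on the (deliberately trivial) candidates below the rule picks the smaller model.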
For more than a century, the constructive description of functional classes in terms of the possible rate of approximation of their functions by functions chosen from a certain set has remained among the most important problems of approximation theory. It turns out that the nonuniformity of the approximation rate between the points of the domain of the approximated function is substantial. For instance, it was only in the mid-1950s that Hölder classes on the segment [−1, 1] were constructively described in terms of approximation by algebraic polynomials. In that case, the constructive description requires the approximation in neighborhoods of the segment endpoints to be essentially better than in a neighborhood of its midpoint. A possible quality test for an approximation method is whether the approximation rate makes it possible to reconstruct the smoothness of the approximated function. Earlier, we investigated the approximation of classes of smooth functions on a countable union of segments on the real axis. In the present paper, we prove that the rate of approximation by entire functions of exponential type makes it possible to reconstruct the smoothness of the approximated function, i.e., a constructive description of classes of smooth functions is possible in terms of the specified approximation method. In an earlier paper, this result was announced for Hölder classes, but the construction of a certain function needed for the proof was omitted. In the present paper, we use another proof that does not rely on that function.
In this paper, the approximation of functions from extensive classes defined on a countable union of segments of the real axis by entire functions of exponential type is considered. The higher the type of the approximating function, the higher the rate of approximation near the segment endpoints can be made compared with their inner points. The general approximation scale, which is nonuniform over the segments and depends on the type of the entire function, is similar to the scale first set out in the study of the approximation of functions by polynomials. For the case of one segment and approximation by polynomials, this scale has made it possible to connect the so-called direct theorems, which state a possible rate of approximation of smooth functions by polynomials, with the inverse theorems, which recover the smoothness of a function approximated by polynomials at a given rate. Approximation by entire functions on a countable union of segments in the case of Hölder spaces was studied by the authors in two preceding papers. This paper significantly expands the class of spaces of functions for which an approximation by entire functions with the required properties is constructed.
We study game equilibria in a model of production and externalities in a network with two types of agents who differ in productivity. Each agent may invest a part of her endowment (for instance, time or money) in the first period; her consumption in the second period depends on her own investment and productivity as well as on the investments of her neighbors in the network. Three modes of agent behavior are possible: passive (no investment), active (a part of the endowment is invested), and hyperactive (the whole endowment is invested). We introduce adjustment dynamics and study the consequences of the junction of two regular networks with different agent productivities. In particular, we study how the behavior of nonadopters (passive agents) changes when they connect to adopters (active or hyperactive agents).
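The adjustment dynamics can be illustrated with a toy linear-quadratic best response, a hypothetical stand-in for the paper's payoffs. Two triangles with different productivities are joined by a single bridge edge, and the three behavior modes appear in one equilibrium:

```python
# Toy two-type network investment game with best response
#   BR_i = clip(alpha_i - 1 + beta * sum_j k_j, 0, e),
# where alpha_i is agent i's productivity, e the endowment, beta the
# externality strength. Illustrative specification, not the paper's model.

def adjust(adjacency, alphas, beta=0.3, e=1.0, iters=200):
    """Simultaneous best-response adjustment from zero investments."""
    k = [0.0] * len(alphas)
    for _ in range(iters):
        k = [min(e, max(0.0, alphas[i] - 1 + beta * sum(k[j] for j in adjacency[i])))
             for i in range(len(alphas))]
    return k

# High-productivity triangle (nodes 0-2, alpha = 2.0) joined to a
# low-productivity triangle (nodes 3-5, alpha = 0.9) by the bridge edge 0-3.
adjacency = [[1, 2, 3], [0, 2], [0, 1], [4, 5, 0], [3, 5], [3, 4]]
alphas = [2.0, 2.0, 2.0, 0.9, 0.9, 0.9]
k = adjust(adjacency, alphas)
```

In the resulting equilibrium the high-productivity nodes are hyperactive (k = 1, the whole endowment), the bridge nonadopter becomes active (k = 0.2) once connected to an adopter, and the remaining low-productivity nodes stay passive (k = 0).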
A new strongly time-consistent (dynamically stable) optimality principle is proposed for a cooperative differential game. This is done by constructing a special subset of the core of the game, which is proposed as the new optimality principle. The construction is based on the introduction of a function V̂ that dominates the values of the classical characteristic function on coalitions. Let V(S; x*(τ), T − τ) denote the value of the classical characteristic function computed in the subgame with initial conditions x*(τ), T − τ along the cooperative trajectory. Define

V̂(S; x0, T − t0) = max_{t0 ≤ τ ≤ T} [V(S; x*(τ), T − τ) / V(N; x*(τ), T − τ)] · V(N; x0, T − t0).

Using this function, we construct an analog of the classical core. It is proved that the constructed core is a subset of the classical core; thus, it can be considered as a new optimality principle. It is also proved that the newly constructed optimality principle is strongly time-consistent.
This work is devoted to the study of cooperative solutions in games with the Looking Forward Approach (LFA). The LFA is used for constructing game-theoretic models and defining solutions for conflict-controlled processes in which information about the process is updated dynamically. We suppose that during the game the players lack certain information about the motion equations and payoff functions. At each instant, players possess only truncated information about the game structure. At given instants, the information about the game is updated: players receive the new information and adapt to it. Such a model cannot be formalized using classical differential game techniques. A new cooperative solution for LFA models is presented and studied.
The paper deals with methods for computing the distributions of functionals of switching diffusions. Switching between two collections of diffusion coefficients occurs at Poisson time moments that are independent of the initial diffusions. One can also consider more general diffusions in which the choice is made from three or more collections of diffusion coefficients.
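A minimal Monte Carlo sketch of such a switching diffusion: Euler–Maruyama with a per-step Bernoulli approximation of the Poisson switching moments, which are generated independently of the diffusion noise. The drift and volatility collections below are illustrative:

```python
import math
import random

def switching_diffusion_path(mu, sigma, lam=1.0, x0=0.0, T=1.0, n=200, rng=random):
    """One Euler-Maruyama path of dX = mu[m] dt + sigma[m] dW, where the
    regime m in {0, 1} flips at Poisson(lam) moments (approximated here by a
    Bernoulli(lam*dt) flip per step) independent of the driving noise."""
    dt = T / n
    x, m = x0, 0
    for _ in range(n):
        if rng.random() < lam * dt:          # independent Poisson-type switch
            m = 1 - m
        x += mu[m] * dt + sigma[m] * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

Averaging such paths estimates the distribution (or moments) of a terminal functional; for drifts +1/−1 with symmetric unit switching rates started in regime 0, the expected terminal value is (1 − e^(−2T))/2 ≈ 0.432 at T = 1.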
We use social media and WWW data to analyse international educational migration from Russia. We find substantial regional differences in migration patterns for three contrasting destinations: the Nordic countries, China, and the Middle East. We build a model of migration flows based on geographic distances to destination countries, various socio-demographic data, and institutional characteristics of educational organisations.
A one-parameter family of Mackey–Glass-type differential delay equations is considered. The existence of a homoclinic solution for a suitable parameter value is proved. As a consequence, one obtains stable periodic solutions for nearby parameter values. An example of a nonlinear function is given for which all sufficient conditions of our theoretical results can be verified numerically. Numerically computed solutions are shown.
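A forward-Euler integration sketch of a Mackey–Glass-type equation; the parameter values and the constant initial history below are illustrative, not the ones from the paper's homoclinic construction:

```python
def mackey_glass(beta=2.0, gamma=1.0, n=10, tau=2.0, x0=0.5, T=100.0, dt=0.01):
    """Forward-Euler integration of the Mackey-Glass delay equation
        x'(t) = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t),
    with constant history x(t) = x0 for t <= 0. Returns the sampled
    trajectory; the delayed value is read from a history buffer."""
    delay = int(round(tau / dt))
    xs = [x0] * (delay + 1)              # history buffer, xs[-1] is x(t)
    steps = int(round(T / dt))
    for _ in range(steps):
        x_del = xs[-(delay + 1)]         # x(t - tau)
        xs.append(xs[-1] + dt * (beta * x_del / (1 + x_del ** n) - gamma * xs[-1]))
    return xs
```

Since the delayed feedback term is bounded, trajectories started at positive values remain positive and bounded, which is the regime in which periodic solutions are sought numerically.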