Filtering, Smoothing, and Control in Discrete-Time Stochastic Distributed-Sensor Networks
Published in Spyros G. Tzafestas, Keigo Watanabe, Stochastic Large-Scale Engineering Systems, 2020
Keigo Watanabe, Spyros G. Tzafestas
A variant of the problem above is the real-time smoothing problem, which is a unique feature of the two-filter form. In this case, given $Z_1^k$ and $Z_2^k$, we would like to compute $\hat{x}_{RS}(k) \triangleq E[x(k) \mid Z_1^k, Z_2^k]$.
Just-in-Time and Kanban
Published in Susmita Bandyopadhyay, Production and Operations Analysis, 2019
Yavuz and Tufekci (2006) dealt with the batch production smoothing problem (BPSP), applying a bounded dynamic programming approach. The paper considers only a mixed-model production system that produces a significant number of products, where demand for any single product is so low that a dedicated production line is not realistic. The aim of production smoothing is to reduce batch size, leading to “one-piece-flow of products, parts, and materials through the entire system.” Jewkes and Power (1993) investigated the justification of investment in JIT. The firm under study was assumed to have monopoly power in purchasing raw materials. The paper mainly identified the benefits of JIT implementation; the authors observed that a significant number of analytical and simulation studies had examined the conditions for successful JIT implementation. The purpose of Li et al. (2006) was to find the optimal production rate for each product in a mass manufacturing system that follows the JIT philosophy. The paper aimed to integrate Manufacturing Resource Planning (MRP-II) with JIT manufacturing, and the authors proposed applying Goal Programming to find the optimal production rates.
Optimal filtering and prediction
Published in Arthur E. Bryson, Yu-Chi Ho, Applied Optimal Control, 2018
If we believe that we understand the dynamics of the ideal system (with perfect and complete measurements and no random disturbances), and if we believe that we have some knowledge of the degree of uncertainty in the measurements and of the degree of intensity of the random disturbances to the system, then, on the basis of all the measurements up to the present time, we can determine the most likely values of the state variables. The process of determining these most likely values is called smoothing, filtering, or prediction, depending on whether we are finding past, present, or future values of the state variables, respectively. In this chapter, the filtering and prediction problems are treated. The results will be directly applicable to stochastic control problems. The smoothing problem is dealt with in Chapter 13.
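As an illustrative sketch (not from the chapter), the distinction between filtering and prediction can be shown with a scalar Kalman filter on a random-walk state. The noise variances `q` and `r`, the initial state, and the function name below are illustrative assumptions, not quantities from the text.

```python
# Hypothetical scalar example: state x(k+1) = x(k) + w(k), measurement
# z(k) = x(k) + v(k). Filtering estimates x(k) from z(1..k); the time
# update is a one-step prediction of the state before the next measurement.

def kalman_filter_predict(zs, q=0.01, r=0.1, x0=0.0, p0=1.0):
    """Return filtered estimates and one-step predictions (random-walk state)."""
    x, p = x0, p0
    filtered, predicted = [], []
    for z in zs:
        # Time update (prediction): random walk, so the estimate carries over,
        # while the uncertainty grows by the process-noise variance q
        x_pred, p_pred = x, p + q
        predicted.append(x_pred)
        # Measurement update (filtering): blend prediction with measurement
        k_gain = p_pred / (p_pred + r)
        x = x_pred + k_gain * (z - x_pred)
        p = (1.0 - k_gain) * p_pred
        filtered.append(x)
    return filtered, predicted
```

With a constant measurement stream the filtered estimate converges toward the measured value, while each prediction lags one measurement behind the corresponding filtered value.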
An adaptive binned kernel density estimation method for analysis of wave energy converters
Published in Ships and Offshore Structures, 2023
The IFORM and ISORM for environmental contours are both parametric methods that require specific predefined probability distribution models for the sea state parameters. However, a drawback of parametric modelling is that the requirements imposed by the predefined probability model may be too restrictive and rigid to adequately estimate the true underlying function. To overcome this rigidity of the parametric sea state parameter probability distribution models, the non-parametric KDE (Kernel Density Estimation) method can be utilised. Kernel density estimation is a fundamental data smoothing problem in which inferences about the population are made based on a data sample of finite size. Suppose $(x_1, \ldots, x_n)$ are finite data samples drawn from an unknown univariate probability density $f$ at any given point $x$. The researchers are interested in estimating the shape of this function. Its ordinary kernel density estimator is (Eckert-Gallup and Martin (2016); Eckert-Gallup et al. (2021); Silverman (1986)): $\hat{f}_h(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)$, where $K$ is the kernel, a non-negative function satisfying $\int K(u)\,du = 1$, and $h > 0$ is a smoothing parameter called the bandwidth.
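The ordinary kernel density estimator described above can be sketched directly. This is a minimal illustration, not the authors' implementation: the Gaussian kernel choice, the sample data, and the bandwidth value below are assumptions for demonstration only.

```python
import numpy as np

# Univariate KDE: f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h),
# here with a Gaussian kernel (an illustrative choice).

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kde(x, samples, h):
    """Evaluate the kernel density estimate at point(s) x with bandwidth h."""
    x = np.atleast_1d(x)
    u = (x[:, None] - samples[None, :]) / h  # pairwise scaled distances
    return gaussian_kernel(u).sum(axis=1) / (len(samples) * h)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=500)  # illustrative stand-in data
grid = np.linspace(-4.0, 4.0, 81)
density = kde(grid, samples, h=0.4)
# The estimate integrates to approximately 1 over a wide enough grid
print(np.trapz(density, grid))
```

The bandwidth `h` controls the smoothness of the estimate; too small a value produces a spiky density, too large a value washes out genuine structure.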
A review of challenges and solutions in the design and implementation of deep graph neural networks
Published in International Journal of Computers and Applications, 2023
Aafaq Mohi ud din, Shaima Qureshi
Over-smoothing: The over-smoothing problem in GNNs was first highlighted in [22]. GNNs' latent node representations become increasingly similar over successive steps of message passing. Once these representations are over-smoothed, adding further steps does not add expressive capacity, so performance does not improve. All node representations converge to a static point when models have an infinite number of layers, rendering them completely indistinguishable [23]. Over-smoothing is the term for this phenomenon, which has been thoroughly detailed in [22–28] from different perspectives: indistinguishable representations of nodes in different classes [22]; all node representations converging to a stationary subspace or point [23]; nodes of the graph that are close to each other becoming more similar [24]; all node representations converging to a stationary point, making them unrelated to the node features [25]; over-mixing of information and noise [26]; mixing up of hidden features of different nodes [27]; and, within each connected component of the network, the features of nodes converging to a single value [28].
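The convergence behaviour described above can be demonstrated with a toy experiment (an illustrative sketch, not from the paper): repeated neighbourhood mean aggregation on a small connected graph drives all node features toward a single value, mimicking what many propagation layers do in a deep GNN. The graph and initial features below are arbitrary assumptions.

```python
import numpy as np

# Adjacency of a connected 4-node path graph, with self-loops added
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)  # row-normalised propagation matrix

X = np.array([[1.0], [0.0], [0.0], [-1.0]])  # initially distinct node features
for _ in range(200):  # many "layers" of pure mean aggregation
    X = P @ X

# Feature spread collapses: all nodes end up nearly indistinguishable
print(X.ravel(), float(X.max() - X.min()))
```

After enough propagation steps the spread `X.max() - X.min()` shrinks from 2.0 to effectively zero, which is exactly the "features converge to a single value per connected component" perspective of [28].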
Bivariate kernel density estimation for environmental contours at two offshore sites
Published in Ships and Offshore Structures, 2022
In statistics, kernel density estimation is a non-parametric way to estimate the probability density function of a random variable. Kernel density estimation is a fundamental data smoothing problem where inferences about the population are made based on a finite data sample. We begin with the formulation of the univariate kernel density estimator. Suppose we have a random sample $x_1, \ldots, x_n$ taken from a continuous, univariate, unknown density $f$. We are interested in estimating the shape of this function $f$. Its univariate kernel density estimator is (Silverman (1986)): $\hat{f}_h(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)$, where $K$ is a non-negative function satisfying $\int K(u)\,du = 1$, which we call the kernel function, and $h > 0$ is a smoothing parameter called the bandwidth parameter. A range of kernel functions are commonly used: uniform, triangular, biweight, triweight, Epanechnikov, normal, and others. The Epanechnikov kernel is optimal in a mean square error sense, and it is therefore utilised in this study. The mathematical formulation of the univariate Epanechnikov kernel is $K(u) = \frac{3}{4}(1 - u^2)$ for $|u| \le 1$, and $K(u) = 0$ otherwise.
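The Epanechnikov kernel discussed above is simple to implement. The following is a minimal sketch (not the study's code); the function names, bandwidth, and sample data are assumptions for illustration.

```python
import numpy as np

# Epanechnikov kernel: K(u) = 0.75*(1 - u^2) for |u| <= 1, else 0.
# It has compact support, so each sample influences only nearby points.

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def kde_epanechnikov(x, samples, h):
    """Univariate KDE with the Epanechnikov kernel at point(s) x."""
    u = (np.atleast_1d(x)[:, None] - samples[None, :]) / h
    return epanechnikov(u).sum(axis=1) / (len(samples) * h)
```

Because the kernel vanishes outside $[-1, 1]$, samples further than one bandwidth from the evaluation point contribute nothing, which is one practical difference from the normal kernel.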