Taxonomy of Routing Protocols for Opportunistic Networks
Published in Khaleel Ahmad, Nur Izura Udzir, Ganesh Chandra Deka, Opportunistic Networks, 2018
Khaleel Ahmad, Muneera Fathima, Khairol Amali bin Ahmad
X. Fang et al. proposed the Consort protocol. It addresses the problem of choosing an opportunistic route for every user so that the total utility in a wireless mesh network (WMN) is maximized subject to node constraints. By combining primal-dual and subgradient methods, a distributed algorithm, Consort, is designed. At every iteration, Consort updates the Lagrange multipliers in a distributed manner according to the user and node behavior observed in the previous iteration. Given the updated Lagrange multipliers, the nodes adjust their own behavior, and the multipliers are then updated again. Consort gives results close to the optimal solution (Fang, Yang & Xue, 2011).
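To make the primal-dual pattern concrete, here is a minimal sketch (not the authors' implementation) of a dual-subgradient loop on a toy one-capacity problem: each node best-responds to the current Lagrange multiplier, and the multiplier is then updated from the resulting constraint violation. All problem data, bounds, and step sizes below are illustrative.

```python
# Minimal dual-subgradient sketch in the spirit of Consort (toy problem,
# not the authors' code): maximize sum_i log(1 + x_i) over four nodes
# subject to sum_i x_i <= c and 0 <= x_i <= 2. Each node best-responds to
# the current Lagrange multiplier; the multiplier is then updated from the
# observed constraint violation, the iterate-and-adjust pattern above.

def node_best_response(lam):
    # argmax_{0 <= x <= 2} [log(1 + x) - lam * x], solved in closed form
    return min(2.0, max(0.0, 1.0 / lam - 1.0))

c = 5.0            # shared capacity (illustrative value)
lam = 0.5          # Lagrange multiplier for the capacity constraint
for k in range(1, 1001):
    x = [node_best_response(lam) for _ in range(4)]  # per-node step
    violation = sum(x) - c        # a subgradient of the dual function
    lam = max(1e-6, lam + 0.5 * violation / k)       # projected update
print(x, lam)      # x_i -> 1.25 each, lam -> 1 / 2.25 ~ 0.444
```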
Resource Optimization for Distributed Video Communications
Published in Ling Guan, Yifeng He, Sun-Yuan Kung, Multimedia Image and Video Processing, 2012
In the P2P application-layer overlay, multiple content distribution or media streaming sessions are expected to run concurrently, so it is important to provide differentiated services among the sessions. For example, a live streaming session should be handled with higher priority than a content distribution session of bulk data. The problem of service differentiation across P2P communication sessions is examined in Ref. [5]. In order to select better paths and guarantee the rate for high-priority sessions, the utility of session $s$ at link $l$ is given by $U_l^s = C_s \log(1 + Q_l x_l^s)$, where $Q_l$ denotes the quality of link $l$, $C_s$ is the priority of session $s$, and $x_l^s$ denotes the bandwidth allocated to session $s$ at link $l$. The optimization problem of service differentiation is formulated to maximize the sum of the utilities over all sessions, subject to the heterogeneous upload and download capacities at each peer. A fully distributed algorithm based on Lagrangian relaxation and the subgradient method is developed to solve the optimization problem.
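As a toy illustration of how the priority weight $C_s$ skews the allocation (our own closed-form sketch via the KKT conditions, not the distributed subgradient algorithm of Ref. [5]), consider a single link of capacity $B$ shared by two sessions, where we maximize $\sum_s C_s \log(1 + Q x_s)$. Stationarity gives $C_s Q / (1 + Q x_s) = \lambda$, hence $x_s = C_s/\lambda - 1/Q$, and the capacity constraint fixes $\lambda$ (assuming both shares are positive):

```python
# Toy KKT solution for two sessions on one link (illustrative values,
# not from Ref. [5]): maximize C1*log(1+Q*x1) + C2*log(1+Q*x2)
# subject to x1 + x2 = B, x_s >= 0.

Q = 0.5                      # link quality (assumed)
C = [3.0, 1.0]               # priorities: live streaming > bulk transfer
B = 10.0                     # link capacity

lam = sum(C) / (B + len(C) / Q)          # from sum_s x_s = B
x = [c / lam - 1.0 / Q for c in C]
print(x)  # [8.5, 1.5]: the high-priority session gets the larger share
```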
Nonlinear Programming
Published in Albert G. Holzman, Mathematical Programming, 2020
The next concept to be introduced is the subgradient, which is related to the ordinary gradient in the case of differentiable convex functions and to the directional derivatives in the more general case. A subgradient of a convex function f at a point x ∈ R^n is a vector ξ ∈ R^n satisfying

$$f(y) \;\geq\; f(x) + \xi^{T}(y - x) \qquad (117)$$

for every y ∈ R^n. For a convex function f it is possible that, at some point x, (i) no vector satisfying (117) exists, (ii) there is a unique ξ satisfying (117), or (iii) there exists more than one such ξ. We denote by ∂f(x) the set of all subgradients of a convex function f at x. Some basic properties of subgradients can be summarized as follows: the set ∂f(x), also called the subdifferential of f, is a closed convex set. It contains a single vector ξ ∈ R^n if and only if the convex function f is differentiable (in the ordinary sense) at x, and then ξ = ∇f(x); that is,

$$\xi_j = \frac{\partial f(x)}{\partial x_j}, \qquad j = 1, \ldots, n. \qquad (118)$$
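A standard one-dimensional example (not from the chapter) illustrates cases (ii) and (iii) of definition (117): the absolute-value function f(x) = |x| is differentiable away from the origin, but at x = 0 it has a whole interval of subgradients.

```latex
% Subdifferential of f(x) = |x| on R.
% For x != 0 the subgradient is unique (case (ii)); at x = 0 the
% inequality (117) reads |y| >= xi * y for all y, which holds exactly
% when |xi| <= 1, so the subdifferential is an interval (case (iii)).
\[
\partial f(x) =
\begin{cases}
\{\operatorname{sgn}(x)\}, & x \neq 0,\\[2pt]
[-1,\,1], & x = 0.
\end{cases}
\]
```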
Convex optimization of nonlinear inequality with higher order derivatives
Published in Applicable Analysis, 2023
Sevilay Demir Sağlam, Elimhan N. Mahmudov
Subdifferentiation theory is a fundamental instrument in the analysis of extremum problems. Convex functions are, in general, not differentiable; nevertheless they have many useful differential properties, one of which is the universal existence of directional derivatives. Moreover, the notion of a subgradient can be defined for a convex function, and the set of all subgradients constitutes its subdifferential. It should be noted that the subdifferential can be defined in different ways for different classes of functions. The Mordukhovich subdifferential is the main class of generalized differentials and plays a vital role in pure and applied analysis. Mordukhovich has recently published a two-volume book [25] covering the key issues of modern variational analysis concerning these subdifferentials from both theoretical and applied points of view.
Distributed sparse optimization for source localization over diffusion fields with cooperative spatiotemporal sensing
Published in Advanced Robotics, 2023
Naoki Hayashi, Masaaki Nagahara
For distributed sparse optimization, Patterson et al. proposed distributed compressed sensing over a time-varying graph under the assumption of global Lipschitz continuity of the cost function [41]. Ravazzi et al. investigated a distributed algorithm for regularized estimation problems on regular graphs [42]. In [43,44], the authors proposed distributed compressed sensing based on the restricted isometry property. In contrast to these methods, the proposed CoopIST algorithm requires neither global Lipschitz continuity of the cost function nor regularity of the communication-graph topology. While subgradient-based algorithms can be applied to a broad class of distributed optimization problems and are relatively easy to implement with modest memory requirements, their convergence rate is relatively slow.
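For intuition about the iterative soft-thresholding (IST) mechanism underlying such sparse solvers, the following centralized loop solves a toy ℓ1-regularized least-squares problem; it is only a sketch of the thresholding step, not the cooperative, distributed CoopIST algorithm itself, and all problem sizes and parameters are arbitrary.

```python
import numpy as np

# Centralized ISTA sketch: minimize 0.5*||A x - y||^2 + lam*||x||_1.
def soft_threshold(v, tau):
    # prox of tau * ||.||_1: shrink each entry toward zero by tau
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))        # toy sensing matrix
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]    # 3-sparse ground truth
y = A @ x_true                            # noiseless measurements

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz const. of grad
x = np.zeros(100)
for _ in range(500):
    grad = A.T @ (A @ x - y)              # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)
print(np.nonzero(np.abs(x) > 1e-3)[0])    # ~recovers the sparse support
```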
Generalised gossip-based subgradient method for distributed optimisation
Published in International Journal of Control, 2019
Zhanhong Jiang, Kushal Mukherjee, Soumik Sarkar
Subgradient methods (Boyd, Xiao, & Mutapcic, 2003; Long, Wu, & Wang, 2015; Nedic & Bertsekas, 2008; Nedic & Ozdaglar, 2008, 2009; Yuan, Xu, & Zhao, 2011) have been widely used to iteratively refine each agent's estimate of the shared decision variable in a distributed manner. The difference between gradient and subgradient methods is that subgradient methods also apply to optimisation problems with nondifferentiable cost functions; accordingly, the conditions required by subgradient methods are weaker than those of gradient methods. In Johansson, Keviczky, Johansson, and Johansson (2008), a scheme combining a consensus algorithm with the subgradient method was presented for solving convex optimisation problems. In Saber et al. (2007), a theoretical framework was established for consensus and cooperation in networked multi-agent systems, and in Nedic and Ozdaglar (2009), distributed subgradient methods were developed for multi-agent optimisation. Moreover, in Nedic and Ozdaglar (2008), the authors also provided convergence-rate estimates showing that the function value converges to within an error of the primal optimal value at a rate governed by the number of subgradient iterations k. Other methods, such as approximate subgradient (Kiwiel, 2004), incremental subgradient (Nedic & Bertsekas, 2001), and stochastic subgradient methods (Ram, Nedich, & Veeravalli, 2008), have also been proposed for large-scale distributed optimisation problems.
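A minimal sketch of the consensus-plus-subgradient update discussed above, in the spirit of Nedic and Ozdaglar (2009): each agent mixes its estimate with its neighbours' estimates and then takes a diminishing subgradient step on its private cost. The graph, mixing weights, and cost functions below are toy choices, not taken from the cited papers.

```python
import numpy as np

# Four agents minimize sum_i f_i(x) with f_i(x) = |x - a_i|; the global
# minimizers are the medians of the a_i, here the interval [2, 5].
a = np.array([1.0, 2.0, 5.0, 9.0])         # private data of each agent
W = np.array([[0.5, 0.5, 0.0, 0.0],        # doubly stochastic mixing
              [0.5, 0.0, 0.5, 0.0],        # weights on a sparse graph
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])

x = np.zeros(4)                            # each agent's local estimate
for k in range(1, 2001):
    g = np.sign(x - a)                     # subgradient of |x_i - a_i|
    x = W @ x - (1.0 / k) * g              # mix with neighbours, then step
print(x)   # estimates reach consensus inside the minimizing interval [2, 5]
```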