Distributed and Parallel Computing
Published in Sunilkumar Manvi, Gopal K. Shyam, Cloud Computing, 2021
Sunilkumar Manvi, Gopal K. Shyam
The beauty of remoting technologies is that there are many to choose from. Java offers a huge variety of possibilities and technologies to implement distributed applications. The selection of a remoting technology already significantly influences the architecture, and also the performance and scalability, of an application. The "oldest" and presumably the most widely used remoting protocol is Remote Method Invocation (RMI) (Fig. 2.4). RMI is the standard protocol for Java Enterprise Edition (JEE) applications. As the name implies, it is designed for invoking methods of objects hosted in other Java Virtual Machines (JVMs). Objects are exposed on the server side and can then be invoked from clients via proxies. The same server object is used by multiple threads; the thread pool is managed by the RMI infrastructure.
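A minimal sketch of exposing and binding such a server object is shown below; the Greeting interface, GreetingImpl class, and service name "greeting" are illustrative placeholders, not from the text. Because the single GreetingImpl instance may be invoked concurrently on threads from the RMI runtime's pool, it must be thread-safe.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Remote interface: every remotely callable method declares RemoteException.
interface Greeting extends Remote {
    String greet(String name) throws RemoteException;
}

// Server-side implementation; one instance serves all clients, so the RMI
// runtime may dispatch calls to it concurrently from its own thread pool.
class GreetingImpl implements Greeting {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

public class RmiServer {
    public static void main(String[] args) throws Exception {
        GreetingImpl impl = new GreetingImpl();
        // Export the object so remote JVMs can call it via an auto-generated proxy (stub).
        Greeting stub = (Greeting) UnicastRemoteObject.exportObject(impl, 0);
        Registry registry = LocateRegistry.createRegistry(1099); // default RMI registry port
        registry.rebind("greeting", stub);
        System.out.println("Greeting service bound; waiting for clients...");
    }
}

A client would then look the proxy up via LocateRegistry.getRegistry(host, 1099).lookup("greeting") and invoke greet() as if it were a local call.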
Development and application of creep test remote-monitoring system
Published in Yigang He, Xue Qing, Automatic Control, Mechatronics and Industrial Engineering, 2019
Z.L. An, W.X. Wang, Z.Y. Wang, T.H. Chen, N. Wang
In the C/S framework, the server is configured with a custom TCP port greater than 1024, and the Socket Server is programmed in Java to accept client connections, transmit data, and push alarm signals in a timely manner. The framework includes the following three modules:
Data processing module: completes data processing and thread control. A task thread pool provides a thread for each connected client; when an idle thread becomes available, the next queued task is started, otherwise the task waits (see the sketch after this list).
Processing module: completes client connection handling as well as XML file encapsulation and parsing. Clients are matched by comparing their sockets. On connection, a socket connection pool is used to support multiple clients by initializing several long-lived connections and adding identification bits.
Communication module: through the data transmission channel, data is sent to the server side in XML format in an asynchronous manner to implement data interaction.
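A minimal sketch of such a Socket server is given below. The port number 9090 and the pool size of 16 worker threads are illustrative assumptions, not values from the paper, and the XML handling is reduced to logging the received line.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Socket server on a custom port (> 1024) that hands each accepted client
// connection to a worker thread drawn from a task thread pool.
public class MonitorSocketServer {
    private static final int PORT = 9090;                          // illustrative custom port
    private static final ExecutorService POOL = Executors.newFixedThreadPool(16);

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(PORT)) {
            while (true) {
                Socket client = server.accept();                   // block until a client connects
                POOL.submit(() -> handle(client));                 // reuse pooled threads instead of one thread per client
            }
        }
    }

    private static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // In the real system the payload would be XML to be parsed; here it is only logged.
                System.out.println("received: " + line);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}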
Portal Performance Engineering
Published in Shailesh Kumar Shivakumar, A Complete Guide to Portals and User Experience Platforms, 2015
Some of the key server parameters for a Java-based portal server are as follows:
JVM heap size: The minimum and maximum heap size can be configured based on the load and the vendor-recommended values.
Thread pool settings: We can adjust the maximum and minimum thread pool size (see the sketch after this list).
Connection pool settings: We can adjust the maximum and minimum connection pool size.
Cache settings: We can fine-tune the native portal server cache framework settings. This includes cache memory settings, clustered cache settings, cache replication settings, the cache invalidation algorithm, disk offload options, etc.
Cluster settings: We can configure the cluster configuration, cluster synchronization, and clustered caching options.
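Portal servers usually expose these knobs through vendor-specific admin consoles, but the sketch below shows what the minimum/maximum thread pool sizes correspond to in plain Java; the concrete numbers (20, 100, a 500-element queue) are illustrative assumptions, not recommended values. JVM heap size would be set separately via the -Xms and -Xmx startup flags.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative thread pool tuning: core size = "minimum", maximum size = "maximum".
public class PortalThreadPoolConfig {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                20,                                  // core (minimum) pool size - illustrative
                100,                                 // maximum pool size - illustrative
                60, TimeUnit.SECONDS,                // idle threads above the core size are reclaimed after 60 s
                new LinkedBlockingQueue<>(500));     // bounded request queue; extra threads beyond the
                                                     // core size are only created once this queue is full

        pool.submit(() ->
                System.out.println("request handled by " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}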
A multi-skilled workforce optimisation in maintenance logistics networks by multi-thread simulated annealing algorithms
Published in International Journal of Production Research, 2021
Hasan Hüseyin Turan, Fuat Kosanoglu, Mahir Atmis
The pseudo-code for the MTSA algorithm can be found in Algorithm 1. First, random initial solutions (i.e. cross-training policies) are generated; the number of initial solutions is set to five in our computational experiments. From these initial solutions, the minimum total cost value and the cross-training policy that produces it are determined. These values are then used to find better solutions. The MTSA algorithm benefits from threads while searching for better solutions. Threads are sub-programs generated by the main program to execute the main program's specific tasks simultaneously. A task in our case is, given a cross-training policy, searching its neighbours to find a better solution. In our model, we chose the thread pool pattern, where there is a queue of tasks and a number of threads executing these tasks. When a thread finishes a task, it pops another task from the queue and executes it until no task remains in the queue. The optimum number of threads to use is the number of cores the computer has, so there are as many tasks as search points and nCores threads (the number of cores in the system). The threads concurrently search for better solutions starting from these points. After the searches on all points are completed, their results are checked. If a better solution is found, then all points are set to that better solution. Otherwise, those points are used to start the search again.
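The sketch below illustrates this thread-pool pattern under stated assumptions: it uses a fixed pool of nCores threads, five illustrative starting values, three illustrative rounds, and a placeholder searchFrom function standing in for the simulated-annealing neighbourhood search, which is not reproduced here.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Thread-pool pattern: nCores worker threads consume a queue of search tasks,
// one task per starting solution; the best result of a round becomes the
// common starting point of the next round.
public class ThreadPoolSearchSketch {

    // Placeholder for one search from a given start value (not the real MTSA search).
    static double searchFrom(double start) {
        return start - Math.random();                              // pretend we found a (possibly) lower cost
    }

    public static void main(String[] args) throws Exception {
        int nCores = Runtime.getRuntime().availableProcessors();   // recommended thread count = core count
        ExecutorService pool = Executors.newFixedThreadPool(nCores);

        List<Double> startPoints = new ArrayList<>(List.of(10.0, 12.0, 9.5, 11.0, 10.5)); // five initial solutions
        double best = startPoints.stream().min(Double::compare).orElseThrow();

        for (int round = 0; round < 3; round++) {                  // a few illustrative rounds
            List<Future<Double>> results = new ArrayList<>();
            for (double p : startPoints) {
                final double start = p;
                results.add(pool.submit((Callable<Double>) () -> searchFrom(start))); // one task per point
            }
            double roundBest = best;
            for (Future<Double> f : results) {
                roundBest = Math.min(roundBest, f.get());          // wait for every task in this round
            }
            if (roundBest < best) {
                best = roundBest;
                final double newStart = roundBest;
                startPoints.replaceAll(x -> newStart);             // all points restart from the better solution
            }                                                      // otherwise the same points are searched again
        }
        pool.shutdown();
        System.out.println("best cost found: " + best);
    }
}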