Device Software and Hardware Engineering Tools
Published in Chandrasekar Vuppalapati, Democratization of Artificial Intelligence for the Future of Humanity, 2021
The open-source Anaconda Individual Edition (formerly Anaconda Distribution) is the easiest way to perform Python/R data science and machine learning on Linux, Windows, and macOS. With over 19 million users worldwide, it is the industry standard for developing, testing, and training on a single machine, enabling individual data scientists to: develop and train machine learning and deep learning models with scikit-learn, TensorFlow, and Theano; analyze data with scalability and performance with Dask, NumPy, pandas, and Numba; and visualize results with Matplotlib, Bokeh, Datashader, and HoloViews.
A congested schedule-based dynamic transit passenger flow estimator using stop count data
Published in Transportmetrica B: Transport Dynamics, 2023
Both the upper-level and lower-level convergence thresholds are set to 0.005. Using these tolerances, each lower-level iteration takes about 38 min, a single upper-level iteration takes about 6 h on average, and the estimation algorithm converged after 75 h. Since the code was written in Python, an interpreted and comparatively slow language, its run time can be improved for online implementation. For example, the algorithm can be made faster by JIT compilation with PyPy or Numba, or by rewriting it in Cython or C++. Increasing the tolerance to 0.01, noted to be sufficient by Hamdouch and Lawphongpanich (2008), should by itself cut the run time substantially. Computational time can be further reduced by using 5-minute intervals instead of 1-minute intervals, which should make the model operable online.
State-space models for building control: how deep should you go?
Published in Journal of Building Performance Simulation, 2020
Baptiste Schubnel, Rafael E. Carrillo, Paolo Taddeo, Lluc Canal Casals, Jaume Salom, Yves Stauffer, Pierre-Jean Alet
For both architectures, the execution speed of the controllers was optimized. Jacobians in the SQP optimization were evaluated with TensorFlow 1.14 on the GPU, giving faster execution than numerical approximation with finite differences. Optimization coupled with the encoder–decoder models took around 4.5 min per control step. Analytic computation of the Jacobians of the linear state-space model with nonlinear outputs was carried out with the parallelizing Python JIT package Numba (Lam, Pitrou, and Seibert 2015), using all available cores. Optimization coupled with the linear state-space models with nonlinear outputs took around 1.3 min per control step. The two methods therefore require computation times of the same order of magnitude at each control step, but the linear state-space model yields an optimization more than three times as fast as the encoder–decoder architecture.