A Deep-dive on Machine Learning for Cyber Security Use Cases
Published in Brij B. Gupta, Michael Sheng, Machine Learning for Computer and Cyber Security, 2019
R. Vinayakumar, K.P. Soman, Prabaharan Poornachandran, Vijay Krishna Menon
Most commercial systems on the market are based on blacklisting, regular expressions and signature-matching methods [52]. All of these methods are reactive: they suffer from delays in detecting variants of existing malicious URLs, and even longer delays for entirely new ones. In the meantime, a malicious author can profit at the expense of end users. Both approaches also require domain experts who constantly monitor the system, create signatures and push updates out to customers. To address this, researchers have over the past decade proposed several machine learning-based URL detection systems [52]. These systems require domain-level expert knowledge for feature engineering and feature representation of the security artifact in question, e.g., URLs, and for evaluating the accuracy of machine learning models using those representations. In real-time deployments, machine learning-based URL systems face further issues: they need a large corpus of labeled training URLs, and they must be re-analyzed as URL patterns continuously change. Deep learning, a subdivision of machine learning [37], is a prominent way to reduce the cost of training because it operates on raw inputs instead of relying on manual feature engineering. Towards this, we propose deep-URL, which takes raw URLs as input and combines character-level embedding, deep layers and a feed-forward network with a non-linear activation function to detect whether a URL is malicious or benign. For a comparative study, the performances of other machine learning classifiers are also evaluated.
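To make the pipeline concrete, here is a minimal sketch in PyTorch of a character-level URL classifier in the spirit of deep-URL: raw URL characters are mapped to integer IDs, passed through an embedding layer and a recurrent layer (an illustrative stand-in for the chapter's "deep layers"), then through a feed-forward head with a non-linear activation. The vocabulary, layer sizes and model name are assumptions for illustration, not the authors' exact architecture.

import torch
import torch.nn as nn

class DeepURL(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        # Character-level embedding over raw URL bytes (0 reserved for padding).
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Deep layer over the character sequence (LSTM chosen here for illustration).
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Feed-forward network with non-linear activation; outputs P(malicious).
        self.fc = nn.Sequential(
            nn.Linear(hidden_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        emb = self.embed(x)          # (batch, max_len, embed_dim)
        _, (h, _) = self.lstm(emb)   # final hidden state summarizes the URL
        return self.fc(h[-1]).squeeze(-1)

def encode(url, max_len=100):
    # Map each character to its ASCII code, truncate/pad to a fixed length.
    ids = [min(ord(c), 127) for c in url[:max_len]]
    ids += [0] * (max_len - len(ids))
    return torch.tensor(ids)

model = DeepURL()
batch = torch.stack([encode("http://example.com/login"),
                     encode("http://paypa1-secure.xyz/verify")])
print(model(batch))  # untrained scores in (0, 1); training would use labeled URLs

Because the model consumes raw character IDs, no manual feature engineering is needed; retraining on fresh labeled URLs is how such a system would track changing URL patterns.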
Deployment Stable Analysis Pattern
Published in M. E. Fayad, Stable Analysis Patterns for Software and Systems, 2017
Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time [1]. It aims at building, testing, and releasing software faster and more frequently. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for CD. System deployment is the deployment of a mechanical device, electrical system, computer program, etc., and its assembly or transformation from a packaged form to an operational working state (sketched in code below). Deployment implies moving a product from a temporary or development state to a permanent or desired state. Select any context (scenario) of each of the above topics.
a. Describe the scenario of the selected context.
b. Draw a class diagram based on the deployment pattern to show the application of the selected context in (a).
c. Document a detailed and significant use case as shown in the case studies in Chapters 5 through 12.
d. Create a sequence diagram of the created use case of (c).
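As referenced above, here is a minimal Python sketch of the packaged-to-operational transformation that the excerpt describes. The class and state names are hypothetical illustrations of the idea, not classes from Fayad's pattern catalog.

from enum import Enum, auto

class State(Enum):
    PACKAGED = auto()     # temporary/development state
    ASSEMBLED = auto()
    OPERATIONAL = auto()  # permanent/desired state

class Deployment:
    """Moves an artifact from a packaged form to an operational working state."""

    def __init__(self, artifact: str):
        self.artifact = artifact
        self.state = State.PACKAGED

    def assemble(self):
        # Assembly is only valid from the packaged form.
        assert self.state is State.PACKAGED
        self.state = State.ASSEMBLED

    def activate(self):
        # Activation completes the transformation to the working state.
        assert self.state is State.ASSEMBLED
        self.state = State.OPERATIONAL

d = Deployment("orders-service-1.4.2.tar.gz")
d.assemble()
d.activate()
print(d.state)  # State.OPERATIONAL

Making each transition explicit and checkable is one way to keep the deployment process straightforward and repeatable, which the excerpt identifies as a precondition for CD.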
Analytical foundations for development of real-time supply chain capabilities
Published in International Journal of Production Research, 2019
Marcos Paulo Valadares de Oliveira, Robert Handfield
There are several limitations of this study. First, the study is exploratory in nature and only scratches the surface of the many implications of real-time data capabilities. The intersection of real-time data with distributed computing, mobile devices, cognitive computing, and the internet of things will continue to evolve, and it is not clear how far most organisations are in their deployment of these technologies. Second, our study relies on perceived measures of data governance, data quality, and other scales. The ability to track the specific nature of governance mechanisms across functions and enterprises in a global supply chain has yet to be explored in detail. Specific case studies of system deployment and event analysis could be helpful in capturing the data used, and the data discarded, by experts reacting to supply chain situations, which could then be coded into machine-learning algorithms. Third, additional insights are needed on how data captured through different technologies such as IoT and distributed computing will be aggregated and clustered for decision-making, and on the need for cross-industry data quality standards that span supply chain participants. Understanding human behaviour in the face of different types of real-time data will also be important, as human-machine interaction will change going forward. These areas represent a number of challenges in the adoption of real-time data in supply chains that are ripe for inquiry (Pierce, Yonke, and Ahmed 2016). Finally, how organisations establish mechanisms for improved data governance and data quality will be fundamental to managers' ability to trust the output of supply chain systems when making decisions. With so many systems across supply chains, bringing together data in a manner that produces quality reporting continues to be a fundamental challenge for many enterprises.