Overview of Basic Numerical Methods and Parallel Computing
Published in Sourav Banerjee, Cara A.C. Leckey, Computational Nondestructive Evaluation Handbook, 2020
OpenMP, which stands for Open Multi-Processing, is a parallel computing API that can transform a serial program into a parallel one without a complete rewrite, taking advantage of multicore hardware [18]. Rewriting an existing unparallelized C/C++ code from scratch would encounter numerous difficulties in terms of both cost and correctness; OpenMP addresses this problem by allowing the gradual conversion of sequential programs to parallel ones. In OpenMP, the compiler is in charge of the parallelizing specifics such as spawning, initiating, and terminating threads. The OpenMP API consists of compiler directives, library routines, and environment variables. The execution model of OpenMP follows the Globally Sequential, Locally Parallel (GSLP) structure: the compiler transforms only the computation-intensive portions of a sequential program into parallel code. For example, the implementation of a CNDE modeling problem using DPSM, discussed in Chapter 7, used a C/C++ compiler with OpenMP support. Instructions to the compiler come in the form of #pragma preprocessor directives, which allow a programmer to access compiler-specific preprocessor extensions [19]. After implementing OpenMP with DPSM, a speedup of less than 5 times was achieved. However, higher speedup was necessary, so other parallel coding opportunities were explored; the next approach was to implement GPU parallel computing using CUDA.
Variability-intensive Software Systems
Published in Ivan Mistrik, Matthias Galster, Bruce R. Maxim, Software Engineering for Variability Intensive Systems, 2019
Conditional compilation is supported by preprocessors in languages like C. Preprocessor directives (e.g., #ifdef in C) enable programmers to include or exclude parts of the code by providing a corresponding configuration [70]. As argued by Hunsen and colleagues, preprocessors in C are widely used to implement highly configurable systems in industrial and open-source software systems [70]. For example, the Linux kernel uses the preprocessor to allow developers to choose among 12,000 distinct options at build time. Approaches have been proposed to check for bugs in preprocessor directives. For example, the TypeChef infrastructure [71] supports parsing and type-checking C code with #ifdef variability, targeted at finding bugs in highly configurable systems such as the Linux kernel.
Introduction
Published in Randall L. Eubank, Ana Kupresanin, Statistical Computing in C++ and R, 2011
In looking back at the header file and associated cpp file for class Power, one will see that the namespace directive was employed only in the cpp file. This was done on purpose: it is, in general, not advisable to place namespace directives in header files. The reason is that the preprocessor essentially inserts the content of a header file into any source code file (which could even be another header file) that imports it with an #include statement. A namespace directive in a header file can therefore propagate unintentionally when the header is included in several source files of a project. The result can be name clashes that are hard to diagnose. It is thus best to use explicit namespace qualifications via the scope resolution operator in header files and to restrict namespace directives to the cpp files for a class. We will demonstrate this approach in the next section.
A GCC-based checker for compliance with MISRA-C's single-translation-unit rules
Published in Connection Science, 2023
Chih-Yuan Chen, Yung-An Fang, Guan-Ren Wang, Peng-Sheng Chen
The GCC C compiler cc1 consists of three parts: a front end, a middle end, and a back end. Figure 2 outlines the structure of the GCC architecture. First, a C program is processed by the C preprocessor cpp, an independent executable that handles macro expansion and header-file inclusion. The output from cpp then passes through the three parts in turn during compilation. The front end reads the input, parses it, and converts it into a standard AST. The AST is then converted into GENERIC, a unified, language-independent representation. The middle end lowers GENERIC into another form called GIMPLE, a three-address representation derived from GENERIC, and then transforms GIMPLE into a static single assignment (SSA) form called GIMPLE SSA. Finally, compilation reaches the back end, where GIMPLE SSA is transformed into the register-transfer language (RTL) representation, which resembles pseudo-assembly code. The RTL representation is optimised, and the proper target assembly code is generated according to the RTL and a description of the target hardware. The red boxes in Figure 2 are the parts related to our work.
Efficient computation of derivatives for solving optimization problems in R and Python using SWIG-generated interfaces to ADOL-C†
Published in Optimization Methods and Software, 2018
K. Kulshreshtha, S.H.K. Narayanan, J. Bessac, K. MacIntyre
One caveat in using the %include macro is that, unlike the C/C++ preprocessor, it reads only the file named in the macro and does not recursively read any files that are #included inside it. This behaviour prevents extraneous wrapper code from being generated for system APIs that happen to be #included in a library's C++ headers. It poses a challenge for processing ADOL-C via SWIG, however, because the convenience header <adolc/adolc.h> contains a large number of #include directives for subsidiary headers as well as required system headers. Applying the C++ preprocessor directly, on the other hand, produces a file containing all the APIs from all the system headers as well as all the subsidiary headers, whereas only the ADOL-C API, not the system APIs, needs to be wrapped for the target language. We therefore wrote a Python script that first excludes all the system headers from the ADOL-C headers and then runs the C++ preprocessor on the result, temporarily producing a single flat header containing all ADOL-C APIs but no system APIs. This file is then %included and processed with SWIG, and the generated sources are compiled.