Blockchain Tokens and Cryptocurrencies
Published in Shaun Aghili, The Auditor's Guide to Blockchain Technology, 2023
Lavanya Vaddi, Jaskaran Singh Chana, Gurjot Singh Kocher
Tokenization refers to the replacement of sensitive information with tokens. Any unique piece of card data, such as the primary account number (PAN), security code, or expiration date, can be tokenized. Credit and debit card transactions are a good illustration of the concept. Whenever an individual swipes a debit or credit card, the PAN is tokenized by generating a random alphanumeric ID. In this way, a record of the transaction is kept without the real PAN ever being stored in the system. The token is sent to the processor, which in turn detokenizes it prior to authorizing the transaction. A token is valid only in the context of a particular individual's transaction, so it is of no use to any third party.
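A minimal Python sketch may help illustrate the flow. The TokenVault class and its in-memory mapping are hypothetical stand-ins for a payment processor's secure vault, not part of the original chapter:

```python
import secrets

class TokenVault:
    """Minimal sketch of a token vault, assuming an in-memory store.
    Real systems keep the vault in hardened, access-controlled storage."""

    def __init__(self):
        self._vault = {}  # token -> PAN (hypothetical mapping)

    def tokenize(self, pan: str) -> str:
        # Replace the PAN with a random alphanumeric ID; the PAN itself
        # never leaves the vault.
        token = secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the processor holding the vault can recover the PAN.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                    # random ID recorded with the transaction
print(vault.detokenize(token))  # processor recovers the PAN to authorize
```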
Text Mining
Published in Brojo Kishore Mishra, Raghvendra Kumar, Natural Language Processing in Artificial Intelligence, 2020
S. Karthikeyan, Jeevanandam Jotheeswaran, B. Balamurugan, Jyotir Moy Chatterjee
Tokenization is the process of splitting a sequence of characters into pieces, such as words or phrases, called tokens, possibly discarding certain characters (such as punctuation) along the way. Here the text is divided into words, expressions, symbols, or other meaningful elements called tokens. The aim of tokenization is the examination of the words in a sentence [1].
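As a simple illustration, the following Python sketch tokenizes text with a regular expression; this is just one of many possible tokenization schemes, not necessarily the one used in the chapter:

```python
import re

def tokenize(text: str) -> list[str]:
    # Keep runs of word characters, discarding punctuation and
    # whitespace; a deliberately simple scheme for illustration.
    return re.findall(r"\w+", text.lower())

print(tokenize("Tokenization splits text into tokens!"))
# ['tokenization', 'splits', 'text', 'into', 'tokens']
```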
Automatic Type Detection of 311 Service Requests Based on Customer Provided Descriptions
Published in Applied Artificial Intelligence, 2022
An LSTM network is composed of an input layer (shown with xt in Figure 2), one or more hidden layers (the area marked as memory cell in Figure 2), and a multi-layer perceptron (MLP) with a softmax output layer. The number of neurons in the input layer is equal to the number of features. Since we consider words as tokens, the number of features is the size of the embedding vector for each word. The length of the embedding vector is set to 256. Creating an embedding vector requires that the input data be integer encoded, so that each word is represented by a unique integer. Encoding words into unique integers is performed using tokenization. Tokenization is the process of transforming a stream of characters into a stream of processing units called tokens, e.g. words (Jurafsky and Martin 2014). A one-dimensional spatial dropout rate of 10% is applied to the input layer. In regular dropout, individual elements are dropped out, but in spatial dropout the entire embedding vector for a word is dropped out. This is equivalent to randomly removing 10% of the words from each document.
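A brief Keras sketch, using the settings described above (embedding length 256, 10% spatial dropout), may make the architecture concrete. The vocabulary size, sequence length, number of classes, and LSTM width are placeholder assumptions, not values taken from the article:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10_000   # assumed vocabulary size (not stated in the text)
SEQ_LEN = 100         # assumed maximum document length in tokens
NUM_CLASSES = 10      # assumed number of 311 request types

model = models.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    # Integer-encoded tokens -> 256-dimensional embedding vectors.
    layers.Embedding(VOCAB_SIZE, 256),
    # Spatial dropout removes whole embedding vectors (~10% of the
    # words in each document) rather than individual elements.
    layers.SpatialDropout1D(0.1),
    # The LSTM hidden layer (memory cell); 128 units is an assumed width.
    layers.LSTM(128),
    # MLP with a softmax output layer over the request types.
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```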
Mining of affective responses and affective intentions of products from unstructured text
Published in Journal of Engineering Design, 2018
W. M. Wang, Z. Li, Layne Liu, Z. G. Tian, Eric Tsui
In the affective analysis, a set of documents related to the target products is collected. In single-document affective analysis, an unstructured text is first divided into sentences by sentence segmentation, based on punctuation detection with regular expressions. We then use a natural language processing tool to perform tokenization, stop-word removal, and part-of-speech (POS) tagging. Tokenization is the process of converting a text into tokens (i.e. words). Stop-word removal is the process of deleting common words, such as pronouns and articles; in this paper, a stop-word list is used to filter out frequently used words. POS tagging is the process of assigning a POS to each word. Adjectives in each sentence are extracted and mapped against the Kansei words collected in the previous process. If a match is found, the sentence is classified as having the corresponding Kansei attribute. Since a Kansei word may belong to multiple Kansei attributes and a sentence may consist of multiple Kansei words, a sentence may belong to multiple Kansei attributes.
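The article does not name the NLP tool used. The following Python sketch, assuming NLTK and a small hypothetical Kansei word list, illustrates the pipeline; nltk.sent_tokenize stands in for the regex-based sentence segmentation described above:

```python
# Requires: nltk.download("punkt"), nltk.download("stopwords"),
#           nltk.download("averaged_perceptron_tagger")
import nltk
from nltk.corpus import stopwords

# Hypothetical mapping of Kansei words to Kansei attributes; the real
# list comes from the authors' earlier collection step.
KANSEI_WORDS = {
    "elegant": {"elegance"},
    "sturdy": {"robustness"},
    "comfortable": {"comfort", "softness"},
}

def kansei_attributes(text: str) -> set[str]:
    attributes = set()
    stop_words = set(stopwords.words("english"))
    # Sentence segmentation, tokenization, stop-word removal, POS tagging.
    for sentence in nltk.sent_tokenize(text):
        tokens = [t for t in nltk.word_tokenize(sentence.lower())
                  if t not in stop_words]
        # Adjectives carry JJ* tags in the Penn Treebank tag set.
        adjectives = [word for word, tag in nltk.pos_tag(tokens)
                      if tag.startswith("JJ")]
        for adj in adjectives:
            # A Kansei word may map to several attributes, so a sentence
            # may receive multiple attributes.
            attributes |= KANSEI_WORDS.get(adj, set())
    return attributes

print(kansei_attributes("The chair looks elegant. It is very comfortable."))
# {'elegance', 'comfort', 'softness'}
```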
Blockchain and the circular economy: potential tensions and critical reflections from practice
Published in Production Planning & Control, 2020
Mahtab Kouhizadeh, Qingyun Zhu, Joseph Sarkis
Blockchain technology can support financial incentivization mechanisms. Incentives can be structured as bitcoin and other tokens and cryptocurrencies, which were the initial applications of blockchain technology (Chen 2018). To motivate people to adopt a certain behaviour, validate information, and improve performance, blockchains can incorporate reward and encouragement programmes. Tokenization can reinforce behaviours ranging from good practices to data gathering, which in turn supports data integrity, product management policies, and data acquisition. Tokens and incentives may be easily tradable and redeemable using accepted cryptocurrencies.