A Brief History of Artificial Intelligence
Published in Ron Fulbright, Democratization of Expertise, 2020
From 2006 to 2011, IBM developed Watson, a question answering computing system initially built to answer questions on the quiz show Jeopardy! To accomplish this goal, Watson was designed to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies (Deshpande et al., 2017). In 2011, Watson participated in the Jeopardy! Challenge and defeated legendary champions Brad Rutter and Ken Jennings (Markoff, 2011). Since 2011, IBM has developed many applications on the Watson platform across multiple domains, including healthcare, teaching assistants (Leopold, 2017), weather forecasting (Jancer, 2016), tax preparation (Moscaritolo, 2017), and a chatbot providing conversation for children’s toys (Takahashi, 2015).
Towards more integrated information management solutions for lifecycle asset management for integrated infrastructure projects
Published in Jaap Bakker, Dan M. Frangopol, Klaas van Breugel, Life-Cycle of Engineering Systems, 2017
Zhi Li, Sander van Nederveen, Rogier Wolfert
One of the most advanced expert systems using artificial intelligence technology is the Watson system developed by IBM. By storing voluminous data in its database, such as books, news, and dictionaries, Watson can interpret human questions and retrieve the desired answers using natural language processing technology. The IBM team has developed over 100 algorithms so that Watson can process and answer questions within three seconds (Watson, 2015). Since August 2011, the Watson system has been applied in the medical industry and has achieved results in several areas. For example, in cancer treatment research, Watson ingested 42 medical journals, over 600,000 treatment cases, and 2 million pages of text. Within seconds, it can filter a requested treatment query against more than 1.5 million patient records from the last 10 years, supporting doctors by automatically proposing treatment options.
The State of the Art
Published in Chace Calum, Artificial Intelligence and the Two Singularities, 2018
Named after the company’s first CEO, IBM’s Watson is a question answering system that ingests questions phrased in natural language and applies knowledge representation and automated reasoning to return answers, also in natural language. It was developed from 2005 to 2010 in order to win the Jeopardy! quiz show, which it did the following January (see subsequent section), to great acclaim. Watson’s architecture comprises a collection of different systems and capabilities, including some employing ML techniques.16
Medical diagnosis and treatment is NP-complete
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2021
Jeffrey E. Arle, Kristen W. Carlson
The most successful general-purpose cognitive AI to date is Watson, which defeated the best human Jeopardy! game-show players in 2011, and whose core program, DeepQA, has been applied to MDT (medical diagnosis and treatment) (Ferrucci et al., 2013). DeepQA takes a different approach to diagnosis than constructing decision trees or DNF clauses from Class I or curated data: it applies hundreds of algorithms to problems in a massive hybrid of all known AI techniques. This heuristic ‘society of mind’ approach (Minsky, 1988) may be driven by the underlying complexity of the target problem spaces. Similarly, cognitive scientists believe that the somewhat successful methods humans use on the traveling salesman problem (TSP) are ultimately heuristic, such as attempting to find a convex hull in the points, or avoiding crossing paths (MacGregor & Chu, 2011).
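The “hundreds of algorithms” style described above can be caricatured in a few lines: many independent scorers each rate every candidate answer, and their evidence is merged into a single ranking. The sketch below is hypothetical and not DeepQA’s actual code; the two toy scorers and the fixed weights are assumptions (DeepQA learns how to combine scorer outputs with machine learning).

```python
# Hypothetical sketch of a DeepQA-style ensemble: several independent
# scorers rate each candidate answer, and a weighted sum merges the evidence.

def length_scorer(question: str, candidate: str) -> float:
    # Toy scorer: prefer short, entity-like answers.
    return 1.0 / (1 + len(candidate.split()))

def overlap_scorer(question: str, candidate: str) -> float:
    # Toy scorer: fraction of candidate words that also appear in the question.
    q = set(question.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / max(len(c), 1)

# Assumed fixed weights; a real system would learn these from training data.
SCORERS = [(length_scorer, 0.4), (overlap_scorer, 0.6)]

def rank_candidates(question: str, candidates: list[str]) -> list[str]:
    """Return candidates sorted by the weighted sum of all scorer outputs."""
    scored = [
        (sum(w * scorer(question, cand) for scorer, w in SCORERS), cand)
        for cand in candidates
    ]
    return [cand for _, cand in sorted(scored, reverse=True)]

ranked = rank_candidates("who defeated Ken Jennings",
                         ["Watson", "the IBM Watson system"])
print(ranked[0])  # the shorter, entity-like candidate ranks first
```

The design point is that no single scorer needs to be reliable on its own; robustness comes from combining many weak, diverse signals.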
A survey on non-factoid question answering systems
Published in International Journal of Computers and Applications, 2022
Manvi Breja, Sanjay Kumar Jain
Questions are broadly classified into two categories: (1) factoid questions, which are fact-based and easier to answer, usually beginning with what, when, who, and where; and (2) non-factoid questions, which are comparatively complex and difficult to answer, usually taking the form of why, how, and what-if type hypothetical questions. Answers to factoid questions are unambiguous and can be given in a single sentence. For example, what-type questions seek details about the subjects of questions; when-type questions address temporal information in the past, present, or future; who-type questions extract information about entities or persons; and where-type questions require the location of the subjects of questions. Non-factoid questions, by contrast, require detailed responses to satisfy the user’s needs, and their answers are sometimes ambiguous and subjective, ranging from a sentence to an entire document. Answering English why-questions is ambiguous, sometimes involving both reason and purpose interpretations. For example, the question ‘Why did she resign?’ can be answered as (1) ‘To earn more money next year’ or (2) ‘Because she got a pay cut.’ The first answer represents a purpose interpretation, reflecting a future event; the second represents a reason interpretation, reflecting a past event. Likewise, for the question ‘Why did she wake up so early?’, the possible answers (1) ‘To see the sunrise’ and (2) ‘Because she needed some comforting’ reflect purpose (motivation) and reason (cause) interpretations, respectively. Advanced NLP techniques, such as pragmatic and discourse analysis [1,2], textual entailment [3], and lexical semantic modeling [4,5], are used to answer non-factoid questions. IBM’s DeepQA project [6] developed the Watson supercomputer, which uses deep content analysis, natural language processing, machine learning, and artificial intelligence to answer questions asked in natural language.
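The factoid/non-factoid split described above can be approximated with a simple leading-wh-word heuristic. The following sketch is illustrative only and is not from the paper; the cue lists and the ‘unknown’ fallback are assumptions, and a real system would need far richer linguistic analysis.

```python
# Minimal sketch of the factoid vs. non-factoid question split, using the
# question's opening word(s) as a heuristic cue (an assumption, not the
# paper's method).

FACTOID_CUES = {"what", "when", "who", "where"}   # fact-based, short answers
NON_FACTOID_CUES = {"why", "how"}                 # reason/manner, longer answers

def classify_question(question: str) -> str:
    """Label a question as 'factoid', 'non-factoid', or 'unknown'."""
    q = question.strip().lower()
    if q.startswith("what if"):          # what-if hypotheticals are non-factoid
        return "non-factoid"
    words = q.split()
    first = words[0] if words else ""
    if first in NON_FACTOID_CUES:
        return "non-factoid"
    if first in FACTOID_CUES:
        return "factoid"
    return "unknown"

print(classify_question("When was Watson developed?"))  # factoid
print(classify_question("Why did she resign?"))         # non-factoid
```

Note that this surface cue cannot resolve the reason-versus-purpose ambiguity of why-questions discussed above; that distinction requires the deeper pragmatic and discourse analysis the authors cite.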
Digital assistants, such as Google Assistant, Alexa, Siri, and Cortana, are used to play songs, videos, and movies, check current weather conditions, provide weather forecasts, set alarms, make phone calls, and even search the Web to answer trivia questions [7]. Despite the development of advanced QASs such as IBM’s Watson [6] and Facebook’s DrQA [8], answering questions based on the user’s context, a prerequisite for accurately answering non-factoid questions, remains an open research challenge.