
Knowledge representation and reasoning

From Wikipedia, the free encyclopedia


Knowledge representation and reasoning (KRR, KR&R, KR²) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology[1] about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.

Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.


Overview

Knowledge representation is a field of artificial intelligence that focuses on designing computer representations that capture information about the world for use in solving complex problems.

The justification for knowledge representation is that conventional procedural code is not the best formalism to use to solve complex problems. Knowledge representation makes complex software easier to define and maintain than procedural code and can be used in expert systems.

For example, talking to experts in terms of business rules rather than code lessens the semantic gap between users and developers and makes development of complex systems more practical.

Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system.[2]

A key trade-off in the design of a knowledge representation formalism is that between expressivity and practicality. The ultimate knowledge representation formalism in terms of expressive power and compactness is First Order Logic (FOL). There is no more powerful formalism than that used by mathematicians to define general propositions about the world. However, FOL has two drawbacks as a knowledge representation formalism: ease of use and practicality of implementation. First order logic can be intimidating even for many software developers. Languages that do not have the complete formal power of FOL can still provide close to the same expressive power with a user interface that is more practical for the average developer to understand. The issue of practicality of implementation is that FOL in some ways is too expressive. With FOL it is possible to create statements (e.g. quantification over infinite sets) that would cause a system to never terminate if it attempted to verify them.

Thus, a subset of FOL can be both easier to use and more practical to implement. This was a driving motivation behind rule-based expert systems. IF-THEN rules provide a subset of FOL, but a very useful and very intuitive one. The history of most of the early AI knowledge representation formalisms, from databases to semantic nets to theorem provers and production systems, can be viewed as a series of design decisions on whether to emphasize expressive power or computability and efficiency.[3]
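
The flavor of such rule-based reasoning can be shown with a minimal forward-chaining sketch. The example below is written in Python purely for illustration; the facts, rule contents, and firing loop are invented here and do not describe any particular expert system shell.

    # Minimal forward-chaining sketch over IF-THEN rules (illustrative only).
    # Facts are strings; a rule fires when all of its conditions are known facts.
    facts = {"has_fever", "has_rash"}

    rules = [
        ({"has_fever", "has_rash"}, "suspect_measles"),  # IF fever AND rash THEN suspect measles
        ({"suspect_measles"}, "recommend_lab_test"),     # IF suspect measles THEN recommend lab test
    ]

    changed = True
    while changed:                     # keep applying rules until no new fact is derived
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # assert the newly inferred fact
                changed = True

    print(sorted(facts))
    # ['has_fever', 'has_rash', 'recommend_lab_test', 'suspect_measles']

Each rule here corresponds to a restricted FOL implication whose antecedent is a conjunction of ground facts, which is what keeps the inference loop simple and guaranteed to terminate.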

In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles to analyze a knowledge representation framework:[4]

  • "A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting," [4] i.e., "by reasoning about the world rather than taking action in it."[4]
  • "It is a set of ontological commitments",[4] i.e., "an answer to the question: In what terms should I think about the world?" [4]
  • "It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends."[4]
  • "It is a medium for pragmatically efficient computation",[4] i.e., "the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information" [4] so as "to facilitate making the recommended inferences."[4]
  • "It is a medium of human expression",[4] i.e., "a language in which we say things about the world."[4]

Knowledge representation and reasoning is a key enabling technology for the Semantic Web. Languages based on the Frame model with automatic classification provide a layer of semantics on top of the existing Internet. Rather than searching via text strings as is typical today, it will be possible to define logical queries and find pages that map to those queries.[5] The automated reasoning component in these systems is an engine known as the classifier. Classifiers focus on the subsumption relations in a knowledge base rather than rules. A classifier can infer new classes and dynamically change the ontology as new information becomes available. This capability is ideal for the ever-changing and evolving information space of the Internet.[6]
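
Subsumption can be illustrated roughly as follows: if a class is described by the set of properties its members must have, then one class subsumes another when its requirements are contained in the other's. The Python sketch below uses invented class and property names and shows only the idea, not the algorithm of any actual classifier.

    # Rough subsumption sketch (illustrative only; class and property names invented).
    # A class is described by the set of properties every member must have.
    # Class A subsumes class B when A's requirements are a subset of B's,
    # i.e. every B is necessarily an A.
    definitions = {
        "Person": {"animate"},
        "Parent": {"animate", "has_child"},
        "Mother": {"animate", "has_child", "female"},
    }

    def subsumes(a, b):
        return definitions[a] <= definitions[b]

    print(subsumes("Person", "Mother"))  # True: every Mother is a Person
    print(subsumes("Mother", "Parent"))  # False

Real description-logic classifiers work over much richer class descriptions, but this subset test captures the core relation they compute.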

The Semantic Web integrates concepts from knowledge representation and reasoning with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capabilities to define knowledge-based objects on the Internet with basic features such as Is-A relations and object properties. The Web Ontology Language (OWL) adds additional semantics and integrates with automatic classification reasoners.[7]
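
A small sketch of these building blocks, using the rdflib Python library, is shown below. The example.org names are invented for illustration; rdflib itself only stores the triples, while RDFS- or OWL-aware reasoners perform the classification described above.

    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")   # hypothetical namespace for illustration
    g = Graph()

    # Is-A (class membership) and subclass relations from the RDF/RDFS vocabulary
    g.add((EX.Dog, RDFS.subClassOf, EX.Mammal))
    g.add((EX.Fido, RDF.type, EX.Dog))

    # An RDFS-aware reasoner could additionally infer (EX.Fido, RDF.type, EX.Mammal).
    print(g.serialize(format="turtle"))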

Characteristics

In 1985, Ron Brachman categorized the core issues for knowledge representation as follows:[8]

  • Primitives. What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives; data structures and algorithms for general fast search were also used, an area with strong overlap with research on data structures and algorithms in computer science. In early systems, the Lisp programming language, which was modeled after the lambda calculus, was often used as a form of functional knowledge representation. Frames and rules were the next kind of primitive. Frame languages had various mechanisms for expressing and enforcing constraints on frame data. All data in frames are stored in slots. Slots are analogous to relations in entity-relation modeling and to object properties in object-oriented modeling (see the frame-and-slot sketch after this list). Another technique for primitives is to define languages modeled after First Order Logic (FOL). The best-known example is Prolog, but there are also many special-purpose theorem-proving environments. These environments can validate logical models and can deduce new theories from existing models. Essentially they automate the process a logician would go through in analyzing a model. Theorem-proving technology had some specific practical applications in software engineering; for example, it is possible to prove that a software program rigidly adheres to a formal logical specification.
  • Meta-representation. This is also known as the issue of reflection in computer science. It refers to the capability of a formalism to have access to information about its own state. An example would be the meta-object protocol in Smalltalk and CLOS that gives developers run-time access to the class objects and enables them to dynamically redefine the structure of the knowledge base even at run time. Meta-representation means the knowledge representation language is itself expressed in that language. For example, in most frame-based environments all frames would be instances of a frame class. That class object can be inspected at run time, so that the object can understand and even change its internal structure or the structure of other parts of the model. In rule-based environments, the rules were also usually instances of rule classes. Part of the meta protocol for rules was the set of meta rules that prioritized rule firing.
  • Incompleteness. Traditional logic requires additional axioms and constraints to deal with the real world as opposed to the world of mathematics. Also, it is often useful to associate degrees of confidence with a statement. I.e., not simply say "Socrates is Human" but rather "Socrates is Human with confidence 50%". This was one of the early innovations from expert systems research which migrated to some commercial tools, the ability to associate certainty factors with rules and conclusions. Later research in this area is known as fuzzy logic.[9]
  • Definitions and universals vs. facts and defaults. Universals are general statements about the world such as "All humans are mortal". Facts are specific examples of universals such as "Socrates is a human and therefore mortal". In logical terms, definitions and universals are about universal quantification, while facts and defaults are about existential quantification. All forms of knowledge representation must deal with this aspect, and most do so with some variant of set theory, modeling universals as sets and subsets and definitions as elements in those sets.
  • Non-monotonic reasoning. Non-monotonic reasoning allows various kinds of hypothetical reasoning. The system associates facts asserted with the rules and facts used to justify them and as those facts change updates the dependent knowledge as well. In rule based systems this capability is known as a truth maintenance system.[10]
  • Expressive adequacy. The standard that Brachman and most AI researchers use to measure expressive adequacy is usually First Order Logic (FOL). Theoretical limitations mean that a full implementation of FOL is not practical. Researchers should be clear about how expressive (how much of full FOL expressive power) they intend their representation to be.[11]
  • Reasoning efficiency. This refers to the run-time efficiency of the system: the ability of the knowledge base to be updated and of the reasoner to develop new inferences in a reasonable period of time. In some ways, this is the flip side of expressive adequacy. In general, the more expressive power a representation has, the less efficient its automated reasoning engine will be. Efficiency was often an issue, especially for early applications of knowledge representation technology, which were usually implemented in interpreted environments such as Lisp that were slow compared to more traditional platforms of the time.
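
The frame-and-slot primitive mentioned in the first item above can be illustrated with a minimal Python sketch. The classes, slot names, and default values below are invented for illustration and do not correspond to any particular frame language.

    # Minimal frame-and-slot sketch (illustrative only; names are invented).
    # A frame bundles slots (attribute-value pairs) and can inherit default
    # slot values from a parent frame, much as object properties are inherited.
    class Frame:
        def __init__(self, name, parent=None, **slots):
            self.name = name
            self.parent = parent
            self.slots = slots

        def get(self, slot):
            if slot in self.slots:
                return self.slots[slot]
            if self.parent is not None:       # fall back to an inherited default
                return self.parent.get(slot)
            return None

    bird = Frame("Bird", legs=2, can_fly=True)              # generic frame with defaults
    penguin = Frame("Penguin", parent=bird, can_fly=False)  # overrides one default

    print(penguin.get("legs"), penguin.get("can_fly"))      # 2 False

The override of can_fly also hints at the defaults and non-monotonic reasoning issues discussed in the list above: a conclusion drawn from a default may have to be withdrawn when more specific information arrives.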

Ontology engineering

In the early years of knowledge-based systems, knowledge bases were fairly small. Knowledge bases that were meant to actually solve real problems, rather than serve as proof-of-concept demonstrations, needed to focus on well-defined problems. So, for example, not just medical diagnosis as a whole topic, but medical diagnosis of certain kinds of diseases.

As knowledge-based technology scaled up, the need for larger knowledge bases and for modular knowledge bases that could communicate and integrate with each other became apparent. This gave rise to the discipline of ontology engineering, designing and building large knowledge bases that could be used by multiple projects. One of the leading research projects in this area was the Cyc project. Cyc was an attempt to build a huge encyclopedic knowledge base that would contain not just expert knowledge but common-sense knowledge. In designing an artificial intelligence agent, it was soon realized that representing common-sense knowledge, knowledge that humans simply take for granted, was essential to make an AI that could interact with humans using natural language. Cyc was meant to address this problem. The language they defined was known as CycL.

After CycL, a number of ontology languages have been developed. Most are declarative languages, and are either frame languages or are based on first-order logic. Modularity, the ability to define boundaries around specific domains and problem spaces, is essential for these languages because, as stated by Tom Gruber, "Every ontology is a treaty, a social agreement among people with common motive in sharing." There are always many competing and differing views, which makes any general-purpose ontology impossible: such an ontology would have to be applicable in any domain, and all the different areas of knowledge would need to be unified.[12]

There is a long history of work attempting to build ontologies for a variety of task domains, e.g., an ontology for liquids,[13] the lumped element model widely used in representing electronic circuits (e.g.,[14]), as well as ontologies for time, belief, and even programming itself. Each of these offers a way to see some part of the world.

The lumped element model, for instance, suggests that we think of circuits in terms of components with connections between them, with signals flowing instantaneously along the connections. This is a useful view, but not the only possible one. A different ontology arises if we need to attend to the electrodynamics in the device: Here signals propagate at finite speed and an object (like a resistor) that was previously viewed as a single component with an I/O behavior may now have to be thought of as an extended medium through which an electromagnetic wave flows.

Ontologies can of course be written down in a wide variety of languages and notations (e.g., logic, LISP, etc.); the essential information is not the form of that language but the content, i.e., the set of concepts offered as a way of thinking about the world. Simply put, the important part is notions like connections and components, not the choice between writing them as predicates or LISP constructs.
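
As a concrete, deliberately toy illustration of this point, the same lumped-element content, components and the connections between them, can be written either as logic-style predicates or as ordinary data structures. The Python sketch below uses invented component names; what matters is that both notations carry the same concepts.

    # The same ontological content written two ways (illustrative only).

    # 1. As logic-style ground facts of the form (predicate, arguments...):
    facts = [
        ("component", "r1", "resistor"),
        ("component", "c1", "capacitor"),
        ("connected", "r1", "c1"),
    ]

    # 2. As ordinary data structures:
    components = {"r1": "resistor", "c1": "capacitor"}
    connections = [("r1", "c1")]

    # Either notation expresses the same view of the circuit: a set of
    # components and the connections between them.
    print(facts[2], connections[0])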

The commitment made in selecting one ontology or another can produce a sharply different view of the task at hand. Consider the difference that arises in selecting the lumped element view of a circuit rather than the electrodynamic view of the same device. As a second example, medical diagnosis viewed in terms of rules (e.g., MYCIN) looks substantially different from the same task viewed in terms of frames (e.g., INTERNIST). Where MYCIN sees the medical world as made up of empirical associations connecting symptom to disease, INTERNIST sees a set of prototypes, in particular prototypical diseases, to be matched against the case at hand.


References

  1. ^ Schank, Roger; Abelson, Robert (1977). Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures. Lawrence Erlbaum Associates, Inc.
  2. ^ Hayes-Roth, Frederick; Waterman, Donald; Lenat, Douglas (1983). Building Expert Systems. Addison-Wesley. pp. 6–7. ISBN 978-0-201-10686-2.
  3. ^ Levesque, Hector; Brachman, Ronald (1985). "A Fundamental Tradeoff in Knowledge Representation and Reasoning". In Ronald Brachman and Hector J. Levesque (ed.). Readings in Knowledge Representation. Morgan Kaufmann. p. 49. ISBN 978-0-934613-01-9. The good news in reducing KR service to theorem proving is that we now have a very clear, very specific notion of what the KR system should do; the bad news is that it is also clear that the services can not be provided... deciding whether or not a sentence in FOL is a theorem... is unsolvable.
  4. ^ a b c d e f g h i j k Davis, Randall; Shrobe, Howard; Szolovits, Peter (Spring 1993). "What Is a Knowledge Representation?". AI Magazine. 14 (1): 17–33. Archived from the original on 2012-04-06. Retrieved 2011-03-23.
  5. ^ Berners-Lee, Tim; Hendler, James; Lassila, Ora (May 2001). "The Semantic Web". Scientific American. 284 (5): 34–43.
  6. ^ Macgregor, Robert (August 13, 1999). "Retrospective on Loom". isi.edu. Information Sciences Institute. Archived from the original on 25 October 2013. Retrieved 10 December 2013.
  7. ^ Knublauch, Holger; Oberle, Daniel; Tetlow, Phil; Wallace, Evan (2006-03-09). "A Semantic Web Primer for Object-Oriented Software Developers". W3C. Archived from the original on 2018-01-06. Retrieved 2008-07-30.
  8. ^ Brachman, Ron (1985). "Introduction". In Ronald Brachman and Hector J. Levesque (ed.). Readings in Knowledge Representation. Morgan Kaufmann. pp. XVI–XVII. ISBN 978-0-934613-01-9.
  9. ^ Bih, Joseph (2006). "Paradigm Shift: An Introduction to Fuzzy Logic" (PDF). IEEE Potentials. 25: 6–21. doi:10.1109/MP.2006.1635021. S2CID 15451765. Archived (PDF) from the original on 12 June 2014. Retrieved 24 December 2013.
  10. ^ Zlatarva, Nellie (1992). "Truth Maintenance Systems and their Application for Verifying Expert System Knowledge Bases". Artificial Intelligence Review. 6: 67–110. doi:10.1007/bf00155580. S2CID 24696160.
  11. ^ Levesque, Hector; Brachman, Ronald (1985). "A Fundamental Tradeoff in Knowledge Representation and Reasoning". In Ronald Brachman and Hector J. Levesque (ed.). Readings in Knowledge Representation. Morgan Kaufmann. pp. 41–70. ISBN 978-0-934613-01-9.
  12. ^ Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. pp. 437–439. ISBN 0-13-604259-7.
  13. ^ Hayes P, Naive physics I: Ontology for liquids. University of Essex report, 1978, Essex, UK.
  14. ^ Davis R, Shrobe H E, Representing Structure and Behavior of Digital Hardware, IEEE Computer, Special Issue on Knowledge Representation, 16(10):75-82.
