
Metaphors of the Mind

Sam Vaknin’s Psychology, Philosophy, Economics and Foreign Affairs Web Sites

The brain (and, by implication, the Mind) has been compared to the latest technological innovation in every generation. The computer metaphor is now in vogue. Computer hardware metaphors were replaced by software metaphors and, lately, by (neuronal) network metaphors. Such attempts to understand by comparison are common in every field of human knowledge. Architects and mathematicians have lately come up with the structural concept of “tensegrity” to explain the phenomenon of life. The tendency of humans to see patterns and structures everywhere (even where there are none) is well documented and probably has survival value.

Another trend is to discount these metaphors as erroneous, irrelevant, or deceptively misleading. Yet, these metaphors are generated by the same Mind that is to be described by them. The entities or processes to which the brain is compared are also “brain-children”, the results of “brain-storming”, conceived by “minds”. What is a computer, a software application, a communications network if not a (material) representation of cerebral events?

In other words, a necessary and sufficient connection must exist between ANYTHING created by humans and the minds of humans. Even a gas pump must have a “mind-correlate”. It is also conceivable that representations of the “non-human” parts of the Universe exist in our minds, whether a-priori (not deriving from experience) or a-posteriori (dependent upon experience). This “correlation”, “emulation”, “simulation”, “representation” (in short, close connection) between the “excretions”, “output”, “spin-offs”, “products” of the human mind and the human mind itself – is a key to understanding it.

This claim is an instance of a much broader category of claims: that we can learn about the artist by his art, about a creator by his creation, and generally: about the origin by any of its derivatives, inheritors, successors, products and similes.

This general contention is especially strong when the origin and the product share the same nature. If the origin is human (father) and the product is human (child) – there is an enormous amount of data to be safely and certainly derived from the product and these data will surely apply to the origin. The closer the origin and the product – the more we can learn about the origin. The computer is a “thinking machine” (however limited, simulated, recursive and mechanical). Similarly, the brain is a “thinking machine” (admittedly much more agile, versatile, non-linear, maybe even qualitatively different). Whatever the disparity between the two (and there is bound to be a large one), they must be closely related to one another. This close relatedness is by virtue of two facts: (1) They are both “thinking machines” and, much more important: (2) the latter is the product of the former. Thus, the computer metaphor is unusually strong. Should an organic computer come to be, the metaphor will strengthen. Should a quantum computer be realized – some aspects of the metaphor will, undoubtedly, be enhanced.

By the way, the converse hypothesis is not necessarily true: that by knowing the origin we can anticipate the products. There are too many free variables here. The existence of a product “collapses” our set of probabilities and increases our knowledge – to use Bohr’s metaphor.

The origin exists as a “wave function”: a series of potentialities with attached probabilities, the potentials being the logically and physically possible products.

But what can be learned about the origin by a crude comparison to the product? Mostly traits and attributes related to structure and to function. These are easily observable. Is this sufficient? Can we learn anything about the “true nature” of the origin? The answer is negative. It is negative in general: we cannot aspire or hope to know anything about the “true nature” of anything. This is the realm of metaphysics, not of physics. Quantum Mechanics provides an astonishingly accurate description of micro-processes and of the Universe without saying anything meaningful about either. Modern physics strives to predict rightly, rather than to expound upon this or that worldview. It describes – it does not explain. Where interpretations are offered (e.g. the Copenhagen interpretation of Quantum Mechanics) they run into insurmountable obstacles and philosophical snags. Thus, modern science is metaphorical and uses a myriad of metaphors (particles and waves, to mention but two prominent ones). Metaphors have proven themselves to be useful scientific tools in the “thinking scientist’s” kit.

Moreover, a metaphor can develop and its development closely traces the developmental phases of the origin. Take the computer software metaphor as an example:

At the dawn of computing the composition of software applications was serial, in machine language and with strict separation of data (called: “structures”) and instruction code (called: “functions” or “procedures”). This was really a “biological” phase akin to the development of the embryonic brain (mind). The machine language closely matched the physical wiring of the hardware. In the case of biology, the instructions (DNA) are also insulated from the data (amino acids and other life substances). Databases were handled on a “listing” basis (“flat file”), were serial and had no intrinsic relationship to each other (an alphabetic order is an extrinsic order, imposed from the outside and existing only in the mind of the “imposer”). They were in the state of a substrate, ready to be acted upon. Only when “mixed” in the computer (as the application was run) did functions operate on structures.
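A minimal sketch of that strict separation (illustrative Python; the record values and function name are my own, not drawn from the text): the data sits inertly as a flat list with no intrinsic relationships, and an alphabetic order is imposed only from outside, by a function, when the program runs.

```python
# Data ("structures"): a flat file of serial records with no intrinsic
# relationship to one another.
records = ["carol", "alice", "bob"]

# Instruction code ("functions"): kept strictly apart from the data.
def alphabetise(data):
    """Impose an extrinsic, alphabetic order that exists only in the mind of the imposer."""
    return sorted(data)

# Only when the application runs are the two "mixed":
# functions operate on structures.
print(alphabetise(records))  # ['alice', 'bob', 'carol']
```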

This was, quite expectedly, followed by the “relational” organization of data (a primitive example of which is the spreadsheet). Data items were related to each other through mathematical formulas. This is the equivalent of the wiring of the brain, as the pregnancy progresses.

The latest evolutionary phase has been the OOPS (Object Oriented Programming Systems). Objects are modules which contain BOTH data and instructions in self-contained units. The user is acquainted with the FUNCTIONS performed by these objects – but not with their STRUCTURE, INTERNAL COMMUNICATIONS AND PROCESSES. Objects, in other words, are “black boxes” (an engineering term). The programmer is unable to tell HOW the object does what it does, how external, useful functions arise from internal, hidden ones. Objects are epiphenomenal, emergent, phase transient. In short: much closer to reality as we came to describe it in modern physics.

Communication can be established among these black boxes – but it is not the communication (its speed or efficacy) that determines the overall efficiency of the system. It is the hierarchical and at the same time fuzzy organization of the objects which does the trick. Objects are organized in classes which define their (actualized and potential) properties. The object’s behaviour (what it does and to what it is allowed to react) is defined by its very belonging to the class. Moreover, a principle of “inheritance” is in operation: objects can be organized in new (sub)classes, inheriting all the definitions and characteristics of the original class plus new properties which distinguish them from their origin. In a way, these newly emergent classes are the products and the classes from which they derive are the origin. This process so closely resembles natural phenomena that it lends additional credibility to the metaphor.
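The black-box and inheritance principles described above can be sketched in Python (the class names and the threshold behaviour are my own illustrative inventions, chosen to echo the essay's biological analogy):

```python
class Neuron:
    """A 'black box': callers see the function it performs, not its internal state."""
    def __init__(self, threshold=1.0):
        self._activation = 0.0      # hidden internal structure
        self._threshold = threshold

    def stimulate(self, signal):
        """The externally visible, useful function."""
        self._activation += signal
        return self._activation >= self._threshold

class InhibitoryNeuron(Neuron):
    """A new (sub)class: it inherits all the definitions of the original class,
    plus a property that distinguishes it from its origin."""
    def stimulate(self, signal):
        # Inverts incoming signals, otherwise behaves exactly like its parent.
        return super().stimulate(-signal)
```

A caller can tell WHAT `stimulate` does, but nothing about HOW the hidden `_activation` gives rise to it, which is the sense in which objects are "black boxes".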

Thus, classes can be used as building blocks. Their permutations define the set of all soluble problems. It can be proven that Turing Machines are a particular instance of a general, much stronger, class theory (back to the Principia Mathematica). The integration of hardware (computer, brain) and software (computer applications, mind) is done through “framework applications” which adjust the two elements structurally and functionally. An equivalent must be found in the brain (a priori categories, a collective unconscious?).

We use the term evolution because one phase replaces another. Relational databases cannot be integrated with object oriented ones, for instance. To run Java applets, a “virtual machine” needs to be embedded in the operating system. These phases closely resemble the development of the brain-mind couplet.

When is a metaphor a good metaphor? When it teaches us something about the origin that could not have been gleaned without it. That it must possess some structural and functional resemblance we have already established. But this is not enough. This is merely the “quantitative, observational” aspect of the metaphor. There is also a qualitative one: it must be instructive, revealing, insightful, aesthetic, parsimonious – in short, it must establish a theory and the resulting hypotheses. A metaphor is a theory which is the result of given logical and aesthetic rules. It must be subjected to the rigorous testing demanded by science before it can be judged to be a reliable one.

If the software metaphor is correct, the brain must contain the following features:

Parity checks through back propagation of signals – the electrochemical signal in a neurone must move back (to its origin) and forward, simultaneously in order to establish a feedback parity loop

The neurone cannot be a binary (two state) machine (a quantum computer will be a multi-state one, for instance). It must have many levels of excitement (representation of information). The threshold (“all or nothing” firing) hypothesis must be wrong

Redundancy must be evident in all the aspects and dimensions of the brain and its activities: the hardware (different centres will perform similar tasks), communications (information transfer channels will be replicated and the same information will be simultaneously transferred over more than one as a basis for comparison), retrieval (data excitation will happen in a few spots at the same time) and usage of obtained data (through working, “upper” memory).

The basic concept of the working of the brain must be the comparison of “representation elements” to “models of the world”. Thus, a coherent picture is obtained which allows for predictions and for manipulation of the environment in effective, result producing ways.

Many of the functions solved by the brain must be recursive. To a large extent, we could even half expect to find that we can reduce all the activities of the brain to computational, mechanically solvable, recursive functions. Should this happen, the brain will come to be regarded as a Turing Machine and the wildest dreams of Artificial Intelligence will come true. Until such time, however, a strong recursive streak should be evident in the operations of this magnificent contraption inside our heads.
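A trivial illustration of such a computational, mechanically solvable, recursive function (standard Python; the example itself is mine, not the essay's):

```python
def factorial(n: int) -> int:
    """Defined in terms of itself: each call reduces the problem
    until a mechanical base case is reached."""
    if n <= 1:                        # base case: the recursion bottoms out
        return 1
    return n * factorial(n - 1)       # recursive step

print(factorial(5))  # 120
```

Any function of this kind is computable by a Turing Machine, which is exactly the sense in which a fully recursive brain would be one.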

The brain must be a learning, self organizing, entity.

Only if these six requirements are cumulatively met can we say that the software metaphor is a strong one. Otherwise, we should be forced to abandon it in favour of a stronger one.

The brain is a paranoiac machine governed by Murphy’s Laws. It assumes the worst, prepares for it and takes no chances. Precariously balanced, materially delicate, in charge of life itself it can – and does – take no chances.

Other articles

Mind and Machine: The Essay

Technology has traditionally evolved as the result of human needs. Invention, when prized and rewarded, will invariably rise up to meet the free market demands of society. It is in this realm that Artificial Intelligence research and the resultant expert systems have been forged.

Much of the material that relates to the field of Artificial Intelligence deals with human psychology and the nature of consciousness. Exhaustive debate on consciousness and the possibility of consciousness in machines has adequately, in my opinion, revealed that it is most unlikely that we will ever converse or interact with a machine of artificial consciousness.

In John Searle’s collection of lectures, Minds, Brains and Science, the arguments centering on the mind-body problem alone are sufficient to convince a reasonable person that there is no way science will ever unravel the mysteries of consciousness.

Key to Searle’s analysis of consciousness in the context of Artificial Intelligence machines are his refutations of the strong and weak AI theses. Strong AI Theorists (SATs) believe that in the future, mankind will forge machines that will think as well as, if not better than, humans. To them, present technology constrains this achievement. The Weak AI Theorists (WATs), almost converse to the SATs, believe that if a machine performs functions that resemble a human’s, then there must be a correlation between it and consciousness. To them, there is no technological impediment to thinking machines, because our most advanced machines already think.

It is important to review Searle’s refutations of these respective theorists’ propositions to establish a foundation (for the purpose of this essay) for discussing the applications of Artificial Intelligence, both now and in the future.

Strong AI Thesis

Strong AI Thesis, according to Searle, can be described in four basic propositions. Proposition one categorizes human thought as the result of computational processes. Given enough computational power, memory, inputs, etc. machines will be able to think, if you believe this proposition. Proposition two, in essence, relegates the human mind to the software bin. Proponents of this proposition believe that humans just happen to have biological computers that run "wetware" as opposed to software. Proposition three, the Turing proposition, holds that if a conscious being can be convinced that, through context-input manipulation, a machine is intelligent, then it is. Proposition four is where the ends will meet the means. It purports that when we are able to finally understand the brain, we will be able to duplicate its functions. Thus, if we replicate the computational power of the mind, we will then understand it.

Through argument and experimentation, Searle is able to refute or severely diminish these propositions. Searle argues that machines may well be able to "understand" syntax, but not the semantics, or meaning communicated thereby.

Essentially, he makes his point by citing the famous "Chinese Room Thought Experiment." It is here he demonstrates that a "computer" (a non-Chinese speaker, a book of rules and the Chinese symbols) can fool a native speaker while having no idea what he is saying. Proving that entities don’t have to understand what they are processing in order to appear to understand refutes proposition one.

Proposition two is refuted by the simple fact that there are no artificial minds or mind-like devices. Proposition two is thus a matter of science fiction rather than a plausible theory.

A good chess program, like my (as yet undefeated) Chessmaster 4000 Turbo, refutes proposition three by passing a Turing test. It appears to be intelligent, but I know it beats me through number crunching and symbol manipulation.

The Chessmaster 4000 example is also an adequate refutation of Professor Simon’s fourth proposition: "you can understand a process if you can reproduce it." The fact that the Software Toolworks company created a program for my computer that simulates the behavior of a grandmaster in the game doesn’t mean that the computer is indeed intelligent.

There are five basic propositions that fall in the Weak AI Thesis (WAT) camp. The first of these states that the brain, due to its complexity of operation, must function something like a computer, the most sophisticated of human inventions. The second WAT proposition states that if a machine’s output, when compared to that of a human counterpart, appears to be the result of intelligence, then the machine must be so. Proposition three concerns itself with the similarity between how humans solve problems and how computers do so. By solving problems based on information gathered from their respective surroundings and memory, and by obeying rules of logic, it is proven that machines can indeed think. The fourth WAT proposition deals with the fact that brains are known to have computational abilities and that a program therein can be inferred. Therefore, the mind is just a big program ("wetware"). The fifth and final WAT proposition states that, since the mind appears to be "wetware", dualism is valid.

Proposition one of the Weak AI Thesis is refuted by gazing into the past. People have historically attributed elements of intelligence and consciousness to the state-of-the-art technology of their time. An example of this is the telegraph system of the latter part of the nineteenth century: people at the time saw correlations between the brain and the telegraph network itself.

Proposition two is readily refuted by the fact that semantical meaning is not addressed by this argument. The fact that a clock can compute and display time doesn’t mean that it has any concept of counting or the meaning of time.

Defining the nature of rule-following is where the weakness lies with the fourth proposition. Proposition four again fails to account for the semantical nature of symbol manipulation. Referring to the Chinese Room Thought Experiment best refutes this argument.

By examining the nature by which humans make conscious decisions, it becomes clear that the fifth proposition is an item of fancy. Humans follow a virtually infinite set of rules that rarely follow highly ordered patterns. A computer may be programmed to react to syntactical information with seemingly semantical output, but again, is it really cognizant?

We, through Searle’s arguments, have amply established that the future of AI lies not in the semantic cognition of data by machines, but in expert systems designed to perform ordered tasks.

Technologically, there is hope for some of the proponents of Strong AI Thesis. This hope lies in the advent of neural networks and the application of fuzzy logic engines.

Fuzzy logic was created as an extension of Boolean logic designed to handle data that is neither completely true nor completely false. Introduced by Dr. Lotfi Zadeh in 1965, fuzzy logic enabled the modelling of the uncertainties of natural language.

Dr. Zadeh regards fuzzy theory not as a single theory, but as "fuzzification", or the generalization of specific theories from discrete forms to continuous (fuzzy) forms.

The meat and potatoes of fuzzy logic is in the extrapolation of data from sets of variables. A fairly apt example of this is the variable lamp. Conventional Boolean logical processes deal well with the binary nature of lights: they are either on, or off. But introduce the variable lamp, which can range in intensity from logically on to logically off, and this is where applications demanding fuzzy logic come in. Using fuzzy algorithms on sets of data, such as differing intensities of illumination over time, we can infer a comfortable lighting level based upon an analysis of the data.

Taking fuzzy logic one step further, we can incorporate it into fuzzy expert systems. Such a system takes collections of data in fuzzy rule format. According to Dr. Zadeh, the rules in a fuzzy logic expert system will usually follow this simple form:

"if x is low and y is high, then z is medium".

Under this rule, x is the low value of a set of data (the light is off) and y is the high value of the same set of data (the light is fully on). z is the output of the inference based upon the degree of fuzzy logic application desired. It is logical to determine that, based upon the inputs, more than one output (z) may be ascertained. The rules in a fuzzy logic expert system are collectively described as the rulebase.

The fuzzy logic inference process follows three firm steps and sometimes an optional fourth. They are:

1. Fuzzification is the process by which the membership functions determined for the input variables are applied to their true values so that truthfulness of rules may be established.

2. Under inference, truth values for each rule’s premise are calculated and then applied to the output portion of each rule.

3. Composition is where all of the fuzzy subsets of a particular problem are combined into a single fuzzy variable for a particular outcome.

4. Defuzzification is the optional process by which fuzzy data is converted to a crisp variable. In the lighting example, a level of illumination can be determined (such as potentiometer or lux values).
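The four steps can be sketched for the single rule "if x is low and y is high, then z is medium" (a minimal Python sketch; the linear membership functions, the [0, 1] scales, and the placement of "medium" at 0.5 are my own simplifying assumptions):

```python
def low(v):
    """Membership degree in 'low' on a [0, 1] scale (fuzzification helper)."""
    return max(0.0, min(1.0, 1.0 - v))

def high(v):
    """Membership degree in 'high' on a [0, 1] scale."""
    return max(0.0, min(1.0, v))

def infer(x, y):
    """One-rule inference: 'if x is low and y is high, then z is medium'."""
    # 1. Fuzzification: crisp inputs become membership degrees.
    # 2. Inference: the AND in the premise is taken as the minimum.
    firing = min(low(x), high(y))
    # 3. Composition: with a single rule, the combined fuzzy output is just
    #    the set 'medium' clipped at the firing strength.
    # 4. Defuzzification (optional): for a symmetric 'medium' set centred at
    #    0.5, centroid defuzzification returns that centre whenever the rule fires.
    crisp = 0.5 if firing > 0 else None
    return firing, crisp
```

For a dim reading x = 0.2 and a bright reading y = 0.9, the rule fires at min(0.8, 0.9) = 0.8 and the crisp output is the centre of "medium".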

A new form of information theory is the Possibility Theory. This theory is similar to, but independent of, fuzzy theory. By evaluating sets of data (either fuzzy or discrete), rules regarding relative distribution can be determined and possibilities can be assigned. It is logical to assert that the more data that is available, the better the possibilities that can be determined.

The application of fuzzy logic to neural networks (properly known as artificial neural networks) will revolutionize many industries in the future. Though we have determined that conscious machines may never come to fruition, expert systems will certainly gain "intelligence" as the wheels of technological innovation turn.

A neural network is loosely based upon the design of the brain itself. Though the brain is impossibly intricate and complex, it has a reasonably well understood feature in its networking of neurons. The neuron is the foundation of the brain itself; each one manifests up to 50,000 connections to other neurons. Multiply that by 100 billion, and one begins to grasp the magnitude of the brain’s computational ability.

A neural network is a network of a multitude of simple processors, each with a small amount of memory. These processors are connected by unidirectional data buses and process only information addressed to them. A centralized processor acts as a traffic cop for data, which is parcelled out to the neural network and retrieved in its digested form. Logically, the more processors connected in the neural net, the more powerful the system.
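A layer of such simple processors might be sketched as follows (pure Python for illustration; the graded sigmoid output and the weight ranges are my own assumptions, and real systems use optimized numerical libraries):

```python
import math
import random

def make_layer(n_units, n_inputs, rng=None):
    """Each 'simple processor' owns only a small memory: its weight vector."""
    rng = rng or random.Random(0)
    return [[rng.uniform(-1.0, 1.0) for _ in range(n_inputs)]
            for _ in range(n_units)]

def forward(layer, inputs):
    """Every unit processes only the signals addressed to it; in hardware
    these weighted sums could run in parallel."""
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(unit, inputs))))
            for unit in layer]
```

Adding units to the layer, like adding processors to the net, increases the system's capacity; each unit's output is graded rather than all-or-nothing.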

Like the human brain, neural networks are designed to acquire data through experience, or learning. By providing examples to a neural network expert system, generalizations are made much as they are by young children learning about items (such as chairs, dogs, etc.).

Modern neural network systems offer greatly enhanced computational ability due to the parallelism of their circuitry. They have also proven themselves in fields such as mapping, where minor errors are tolerable, there is a lot of example data, and the rules are generally hard to nail down.

Educating neural networks begins by programming a "backpropagation of error", which is the foundational operating system that defines the inputs and outputs of the system. The best example I can cite is the Windows operating system from Microsoft. Of course, personal computers don’t learn by example, but Windows-based software will not run outside (or in the absence) of Windows.

One negative feature of educating neural networks by "backpropagation of error" is a phenomenon known as "overfitting". "Overfitting" errors occur when conflicting information is memorized, so the neural network exhibits a degraded state of function as a result. At worst, the expert system may lock up, but it is more common to see an impeded state of operation. By running programs in the operating shell that review data against a database, these problems have been minimized.

In the real world, we are seeing an increasing prevalence of neural networks. To fully realize the potential benefits of neural networks in our lives, research must be intense and global in nature. In the course of my research on this essay, I was privy to several institutions and organizations dedicated to the collaborative development of neural network expert systems.

To be a success, research and development of neural networking must address societal problems of high interest and intrigue. Motivating the talents of the computing industry will be the only way we will fully realize the benefits and potential power of neural networks.

There would be no support, naturally, if there was no short-term progress. Research and development of neural networks must be intensive enough to show results before interest wanes.

New technology must be developed through basic research to enhance the capabilities of neural net expert systems. It is generally acknowledged that the future of neural networks depends on overcoming many technological challenges, such as data cross-talk (caused by radio frequency generation of rapid data transfer) and limited data bandwidth.

Real-world applications of these "intelligent" neural network expert systems include, according to the Artificial Intelligence Center, Knowbots/Infobots and intelligent Help desks. These are primarily easily accessible entities that will host a wealth of data and advice for prospective users. Autonomous vehicles are another future application of intelligent neural networks. There may come a time in the future where planes will fly themselves and taxis will deliver passengers without human intervention. Translation is a wonderful possibility of these expert systems. Imagine the ability to have a device translate your English spoken words into Mandarin Chinese! This goes beyond simple languages and syntactical manipulation. Cultural gulfs in language would also be the focus of such devices.

Through the course of Mind and Machine, we have established that artificial intelligence’s function will not be to replicate the conscious state of man, but to act as an auxiliary to him. Proponents of Strong AI Thesis and Weak AI Thesis may hold out, but the inevitable will manifest itself in the end.

It may be easy to ridicule those proponents, but I submit that in their research into making conscious machines, they are doing the field a favor in the innovations and discoveries they make.

In conclusion, technology will prevail in the field of expert systems only if the philosophy behind them is clear and strong. We should not strive to make machines that may supplant our causal powers, but rather ones that complement them. To me, these expert systems will not replace man – they shouldn’t. We will see a future where we shall increasingly find ourselves working beside intelligent systems.

Реферат: Mind And Machine Essay Research Paper Mind

Mind And Machine Essay, Research Paper

Mind and Machine: The Essay

Technology has traditionally evolved as the result of human needs. Invention, when prized and rewarded, will invariably rise-up to meet the free market demands of society. It is in this realm that Artificial Intelligence research and the resultant expert systems have been forged.

Much of the material that relates to the field of Artificial Intelligence deals with human psychology and the nature of consciousness. Exhaustive debate on consciousness and the possibilities of consciousnessness in machines has adequately, in my opinion, revealed that it is most unlikely that we will ever converse or interract with a machine of artificial consciousness.

In John Searle’s collection of lectures, Minds, Brains and Science, arguments centering around the mind-body problem alone is

sufficient to convince a reasonable person that there is no way science will ever unravel the mysteries of consciousness.

Key to Searle’s analysis of consciousness in the context of Artificial Intelligence machines are refutations of strong and weak AI theses. Strong AI Theorists (SATs) believe that in the future, mankind will forge machines that will think as well as, if not better than humans. To them, pesent technology constrains this achievement. The Weak AI Theorists (WATs), almost converse to the SATs, believe that if a machine performs functions that resemble a human’s, then there must be a correlation between it and consciousness. To them, there is no technological impediment to thinking machines, because our most advanced machines already think.

It is important to review Searle’s refutations of these respective theorists’ proposition to establish a foundation (for the purpose of this essay) for discussing the applications of Artificial Intelligence, both now and in the future.

Strong AI Thesis

Strong AI Thesis, according to Searle, can be described in four basic propositions. Proposition one categorizes human thought as the result of computational processes. Given enough computational power, memory, inputs, etc. machines will be able to think, if you believe this proposition. Proposition two, in essence, relegates the human mind to the software bin. Proponents of this proposition believe that humans just happen to have biological computers that run "wetware" as opposed to software. Proposition three, the Turing proposition, holds that if a conscious being can be convinced that, through context-input manipulation, a machine is intelligent, then it is. Proposition four is where the ends will meet the means. It purports that when we are able to finally understand the brain, we will be able to duplicate its functions. Thus, if we replicate the computational power of the mind, we will then understand it.

Through argument and experimentation, Searle is able to refute or severely diminish these propositions. Searle argues that machines may well be able to "understand" syntax, but not the semantics, or meaning communicated thereby.

Esentially, he makes his point by citing the famous "Chinese Room Thought Experiment." It is here he demonstrates that a "computer" (a non-chinese speaker, a book of rules and the chinese symbols) can fool a native speaker, but have no idea what he is saying. By proving that entities don’t have to understand what they are processing to appear as understanding refutes proposition one.

Proposition two is refuted by the simple fact that there are no artificial minds or mind-like devices. Proposition two is thus a matter of science fiction rather than a plausible theory

A good chess program, like my (as yet undefeated) Chessmaster 4000 Trubo refutes proposition three by passing a Turing test. It appears to be intelligent, but I know it beats me through number crunching and symbol manipulation.

The Chessmaster 4000 example is also an adequate refutation of Professor Simon’s fourth proposition: "you can understand a process if you can reproduce it." Because the Software Toolworks company created a program for my computer that simulates the behavior of a grandmaster in the game, doesn’t mean that the computer is indeed intelligent.

There are five basic propositions that fall in the Weak AI Thesis (WAT) camp. The first of these states that the brain, due to its complexity of operation, must function something like a computer, the most sophisticated of human invention. The second WAT proposition states that if a machine’s output, if it were compared to that of a human counterpart appeared to be the result of intelligence, then the machine must be so. Proposition three concerns itself with the similarity between how humans solve problems and how computers do so. By solving problems based on information gathered from their respective surroundings and memory and by obeying rules of logic, it is proven that machines can indeed think. The fourth WAT proposition deals with the fact that brains are known to have computational abilities and that a program therein can be inferred. Therefore, the mind is just a big program ("wetware"). The fifth and final WAT proposition states that, since the mind appears to be "wetware", dualism is valid.

Proposition one of the Weak AI Thesis is refuted by gazing into the past. People have historically attributed elements of intelligence and consciousness to the state-of-the-art technology of their time. An example is the telegraph system of the late nineteenth century: people at the time saw correlations between the brain and the telegraph network itself.

Proposition two is readily refuted by the fact that semantic meaning is not addressed by this argument. The fact that a clock can compute and display time does not mean that it has any concept of counting or of the meaning of time.

The weakness of the fourth proposition lies in defining the nature of rule-following. Proposition four again fails to account for the semantic nature of symbol manipulation; referring to the Chinese Room thought experiment best refutes this argument.

By examining the nature by which humans make conscious decisions, it becomes clear that the fifth proposition is an item of fancy. Humans follow a virtually infinite set of rules that rarely follow highly ordered patterns. A computer may be programmed to react to syntactic information with seemingly semantic output, but again, is it really cognizant?

Through Searle's arguments, we have amply established that the future of AI lies not in the semantic cognition of data by machines, but in expert systems designed to perform ordered tasks.

Technologically, there is hope for some of the proponents of Strong AI Thesis. This hope lies in the advent of neural networks and the application of fuzzy logic engines.

Fuzzy logic was created as an extension of Boolean logic designed to handle data that is neither completely true nor completely false. Introduced by Dr. Lotfi Zadeh in 1965, fuzzy logic enabled the modelling of the uncertainties of natural language.

Dr. Zadeh regards fuzzy theory not as a single theory, but as "fuzzification": the generalization of specific theories from discrete (crisp) forms to continuous (fuzzy) forms.

The meat and potatoes of fuzzy logic is the extrapolation of data from sets of variables. A fairly apt example is the variable lamp. Conventional Boolean logic deals well with the binary nature of lights: they are either on or off. But introduce the variable lamp, which can range in intensity from logically on to logically off, and you have an application that demands fuzzy logic. Using fuzzy algorithms on sets of data, such as differing intensities of illumination over time, we can infer a comfortable lighting level from an analysis of the data.
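The variable lamp's in-between states can be captured with membership functions, which assign each intensity a degree of membership between 0 and 1 in sets like "dim" or "bright". The triangular shapes and breakpoints below are hypothetical illustrative choices, not part of any particular system:

```python
def triangular(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set rising from a,
    peaking at b, and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets over lamp intensity (0 = off, 100 = fully on):
def dim(x):    return triangular(x, -1, 0, 50)
def medium(x): return triangular(x, 0, 50, 100)
def bright(x): return triangular(x, 50, 100, 101)

# A lamp at 70% intensity belongs partly to "medium" and partly to "bright":
print(medium(70), bright(70))  # → 0.6 0.4
```

Unlike a Boolean light, the lamp at 70% is not forced into "on" or "off"; it is 0.6 "medium" and 0.4 "bright" at the same time, which is exactly the kind of graded truth fuzzy logic was designed for.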

Taking fuzzy logic one step further, we can incorporate it into fuzzy expert systems. Such a system takes collections of data in fuzzy rule format. According to Dr. Zadeh, the rules in a fuzzy logic expert system usually follow this simple form:

"if x is low and y is high, then z is medium".

Under this rule, x is the low value of a set of data (the light is off) and y is the high value of the same set of data (the light is fully on); z is the output of the inference, based upon the degree of fuzzy logic application desired. It is logical to determine that, based upon the inputs, more than one output (z) may be ascertained. The collection of rules in a fuzzy logic expert system is described as the rulebase.
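A minimal sketch of how such a rule might fire, assuming the common convention that fuzzy AND is the minimum of its operands (the membership degrees below are hypothetical):

```python
# Degrees of membership for the inputs (hypothetical values):
x_is_low  = 0.8   # x belongs to the set "low" with degree 0.8
y_is_high = 0.6   # y belongs to the set "high" with degree 0.6

# Fuzzy AND is commonly interpreted as the minimum of the operands,
# so the rule "if x is low and y is high, then z is medium"
# fires with strength min(0.8, 0.6).
rule_strength = min(x_is_low, y_is_high)

# That strength caps the "medium" output set for z:
z_is_medium = rule_strength
print(z_is_medium)  # → 0.6
```

With several such rules in the rulebase, each one contributes its own capped output set, which is why more than one output value for z may be ascertained from the same inputs.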

The fuzzy logic inference process follows three firm steps, sometimes followed by an optional fourth. They are:

1. Fuzzification is the process by which the membership functions defined for the input variables are applied to their actual values, so that the degree of truth of each rule premise may be established.

2. Under inference, truth values for each rule’s premise are calculated and then applied to the output portion of each rule.

3. Composition is where all of the fuzzy subsets of a particular problem are combined into a single fuzzy variable for a particular outcome.

4. Defuzzification is the optional process by which fuzzy data is converted to a crisp variable. In the lighting example, a level of illumination can be determined (such as potentiometer or lux values).
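The four steps above can be sketched end to end for the lamp example. This is a minimal illustration under assumed conventions: triangular membership functions and a weighted-average (centroid-style) defuzzification, both common but by no means mandated choices. The rulebase and breakpoints are hypothetical.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(ambient):
    # 1. Fuzzification: how "dark" and how "light" is the ambient reading (0-100)?
    dark  = tri(ambient, -1, 0, 60)
    light = tri(ambient, 40, 100, 101)
    # 2. Inference: each rule's premise truth caps its output set.
    #    Rule A: if ambient is dark,  then lamp is bright (representative level 80).
    #    Rule B: if ambient is light, then lamp is dim    (representative level 20).
    rules = [(dark, 80), (light, 20)]     # (rule strength, lamp level)
    # 3. Composition: combine all capped outputs into one fuzzy result.
    # 4. Defuzzification: a weighted average collapses it to a crisp lamp level.
    total = sum(strength for strength, _ in rules)
    return sum(s * level for s, level in rules) / total if total else 0.0

print(infer(30))   # fairly dark room → lamp driven toward the bright level
```

The crisp number returned at step 4 is what could actually be written to a potentiometer; omit that step and the system simply hands on the composed fuzzy variable instead.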

A newer form of information theory is possibility theory. This theory is similar to, but independent of, fuzzy theory. By evaluating sets of data (either fuzzy or discrete), rules regarding relative distribution can be determined and possibilities can be assigned. It is logical to assert that the more data that is available, the better the possibilities that can be determined.

The application of fuzzy logic to neural networks (properly known as artificial neural networks) will revolutionize many industries in the future. Though we have determined that conscious machines may never come to fruition, expert systems will certainly gain "intelligence" as the wheels of technological innovation turn.

A neural network is loosely based upon the design of the brain itself. Though the brain is impossibly intricate and complex, it has one reasonably well understood feature: its networking of neurons. The neuron is the foundation of the brain itself; each one makes up to 50,000 connections to other neurons. Multiply that by 100 billion neurons, and one begins to grasp the magnitude of the brain's computational ability.

A neural network is a network of a multitude of simple processors, each with a small amount of memory. These processors are connected by unidirectional data buses and process only information addressed to them. A centralized processor acts as a traffic cop for data, which is parcelled out to the neural network and retrieved in its digested form. Logically, the more processors connected in the neural net, the more powerful the system.
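Each of those "simple processors" can be sketched as an artificial neuron: it takes weighted inputs, sums them, and squashes the sum through an activation function. The weights and layout below are hypothetical, chosen only to show the structure of a tiny two-layer network:

```python
import math

def neuron(inputs, weights, bias):
    """One simple processor: a weighted sum passed through a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # output always between 0 and 1

def network(x1, x2):
    """A tiny two-layer network with hypothetical, hand-picked weights."""
    h1 = neuron([x1, x2], [0.5, -0.4], 0.1)    # hidden neuron 1
    h2 = neuron([x1, x2], [-0.3, 0.8], 0.0)    # hidden neuron 2
    return neuron([h1, h2], [1.2, -0.7], 0.2)  # output neuron

print(network(1.0, 0.0))
```

Real networks differ mainly in scale, with thousands or millions of such units wired in parallel, which is where their computational power comes from.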

Like the human brain, neural networks are designed to acquire knowledge through experience, or learning. By providing examples to a neural network expert system, generalizations are made much as they are by young children learning about items such as chairs or dogs.

Modern neural network systems offer greatly enhanced computational ability owing to the parallelism of their circuitry. They have also proven themselves in fields such as mapping, where minor errors are tolerable, there is a lot of example data, and the rules are generally hard to nail down.

Educating neural networks begins by programming a "backpropagation of error" procedure, the foundational mechanism that defines how the system adjusts itself to the inputs and outputs it is given. The best analogy I can cite is the Windows operating system from Microsoft: of course, personal computers don't learn by example, but Windows-based software will not run outside (or in the absence of) Windows.

One negative feature of educating neural networks by backpropagation of error is a phenomenon known as "overfitting". Overfitting errors occur when conflicting or noisy information is memorized, so the neural network exhibits a degraded state of function on new data as a result. At worst, the expert system may lock up, but it is more common to see an impeded state of operation. By running programs in the operating shell that review data against a database, these problems have been minimized.
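Backpropagation of error can be sketched in its simplest form: compare the network's output to the desired output, and nudge each weight in proportion to its share of the error. This toy example trains a single sigmoid neuron to behave like a logical AND gate; the learning rate and epoch count are arbitrary choices for illustration:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Training examples: inputs and the desired (target) output for AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, initially zero
b = 0.0          # bias
lr = 0.5         # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = target - out              # the error to propagate back
        grad = err * out * (1 - out)    # scaled by the sigmoid's derivative
        w[0] += lr * grad * x1          # each weight moves in proportion
        w[1] += lr * grad * x2          #   to its contribution to the error
        b    += lr * grad

# After training, the neuron approximates the AND gate:
for (x1, x2), target in data:
    print(x1, x2, round(sigmoid(w[0] * x1 + w[1] * x2 + b)))
```

Overfitting enters the picture when a larger network is trained long enough to memorize noisy or conflicting examples instead of the underlying pattern; the cure is usually to hold back some data and stop training when performance on it degrades.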

In the real world, we are seeing an increasing prevalence of neural networks. To fully realize the potential benefits of neural networks in our lives, research must be intense and global in nature. In the course of my research for this essay, I encountered several institutions and organizations dedicated to the collaborative development of neural network expert systems.

To be a success, research and development of neural networking must address societal problems of high interest and intrigue. Motivating the talents of the computing industry will be the only way we will fully realize the benefits and potential power of neural networks.

There would be no support, naturally, if there were no short-term progress. Research and development of neural networks must be intensive enough to show results before interest wanes.

New technology must be developed through basic research to enhance the capabilities of neural net expert systems. It is generally acknowledged that the future of neural networks depends on overcoming many technological challenges, such as data crosstalk (caused by the radio-frequency emissions of rapid data transfer) and limited data bandwidth.

Real-world applications of these "intelligent" neural network expert systems include, according to the Artificial Intelligence Center, knowbots/infobots and intelligent help desks. These are easily accessible entities that will host a wealth of data and advice for prospective users. Autonomous vehicles are another future application of intelligent neural networks: there may come a time when planes fly themselves and taxis deliver passengers without human intervention. Translation is another wonderful possibility of these expert systems. Imagine a device that translates your spoken English into Mandarin Chinese! This goes beyond simple vocabulary and syntactic manipulation; cultural gulfs in language would also be the focus of such devices.

Through the course of Mind and Machine, we have established that artificial intelligence's function will not be to replicate the conscious state of man, but to act as an auxiliary to him. Proponents of the Strong AI Thesis and the Weak AI Thesis may hold out, but the inevitable will manifest itself in the end.

It may be easy to ridicule those proponents, but I submit that in their research into making conscious machines, they are doing the field a favor in the innovations and discoveries they make.

In conclusion, technology will prevail in the field of expert systems only if the philosophy behind them is clear and strong. We should not strive to make machines that supplant our causal powers, but rather ones that complement them. These expert systems will not replace man, and they shouldn't. We will see a future in which we increasingly find ourselves working beside intelligent systems.