Tuesday, June 20, 2023

PHILOSOPHY OF DEEP LEARNING
 
The discussion will cover various philosophical and technical aspects of artificial intelligence, consciousness, and sub-consciousness. We shall discuss the idea that deep learning, artificial intelligence, consciousness, and sub-consciousness are not a language to be taught, but rather a mathematical phenomenon of gradually increasing complexity, one that emerges as different types of signals are linked during the processing of mathematical patterns of growing complexity.

This increasing complexity can be expressed as a simple mathematical relationship: a fraction of the total possible consciousness, or super-consciousness. We could then visualize the emergence of consciousness in various forms. Perhaps a fractal tree could be used; a convergent series of a fractal tree actually gives about five different possible morphological shapes, or forms of consciousness, but these require a lot of computational power to design.

We shall also discuss the importance of processing different signals holistically through a mathematical pattern language, and of intelligent mathematical designs for electronic deep learning architectures that can adapt and change their structure in response to new information and experiences.

We shall outline how the complexity of the electronic deep learning architecture should increase as the 
machines learn more, and that the architecture should be optimized to take advantage of new machine learning algorithms.

Additionally, we shall discuss the possibility of giving machines more freedom to write their own algorithms and design their own electronic architecture, so as to achieve extremely intelligent and conscious machines in a progressive manner.

Analysis of how the brain functions, and the importance of development in the first five years.

What cognitive capacity current deep learning systems might possess, and what cognitive capacity they could possess in the future.

How we can develop a theoretical understanding of deep learning systems.

What are the key obstacles on the path from current deep learning systems to human-level cognition?




 Deep learning is a subfield of machine learning that is inspired by the structure and function of the brain's neural networks. It involves training artificial neural networks, which are composed of layers of interconnected nodes, to perform tasks such as image and speech recognition, natural language processing, and decision making.

There are several types of deep learning network, including: Feedforward Neural Networks, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Autoencoders, and Transformers.
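
The "layers of interconnected nodes" picture can be made concrete with a minimal sketch. The following is an illustrative toy in Python/NumPy, not any particular framework: a two-layer feedforward network with random, untrained weights, shown only to illustrate how signals flow from layer to layer. All sizes and the choice of activation are arbitrary assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# A tiny two-layer feedforward network: 4 inputs -> 5 hidden -> 3 outputs.
W1 = rng.normal(size=(4, 5))   # input-to-hidden weights
W2 = rng.normal(size=(5, 3))   # hidden-to-output weights

def forward(x):
    h = sigmoid(x @ W1)        # hidden-layer activations
    return sigmoid(h @ W2)     # output-layer activations

x = rng.normal(size=(1, 4))    # one example with 4 input features
y = forward(x)
print(y.shape)                 # (1, 3)
```

Training such a network would mean adjusting W1 and W2 from data; here they stay random because the point is only the structure.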

1) How the brain functions;

The brain is made up of a complex network of neurons, which are the basic building blocks of the
nervous system. These neurons communicate with one another through electrical and chemical signals,
and are organized into different regions that are responsible for specific functions, such as vision,
movement, and memory.
In the process of learning, new connections are formed between neurons, and existing connections are
strengthened or weakened. This process is known as synaptic plasticity, and is thought to be the basis
for how the brain encodes and retrieves information.
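
The strengthening and weakening of connections described above can be caricatured with Hebb's classic rule ("neurons that fire together wire together"). This is a deliberately simplified sketch of synaptic plasticity, not a model of real neurons; the learning rate and activity values are arbitrary.

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen a connection when pre- and post-synaptic
    activity coincide; the weight w stands for synapse strength."""
    return w + lr * pre * post

w = 0.5
# Correlated activity, repeated, strengthens the synapse.
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # 1.5
```

If either `pre` or `post` were zero, the weight would stay unchanged, mirroring the idea that unused connections do not strengthen.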

While large language models and data models can be used to simulate some aspects of the brain's
behaviour, they are not an exact replica of how the brain works. Additionally, the brain is able to
constantly adapt and change its structure and function in response to new information and
experiences, something that current models are not yet able to replicate.

Current deep learning neural networks are primarily designed to perform short-term problem-solving
tasks. They are able to process and analyse large amounts of data in real-time, and can be trained to
make predictions, classifications, and decisions based on that data. They are able to learn patterns and
relationships in the data and generalize them to new examples. However, these neural networks are
not designed to store and retrieve information over an extended period of time like long-term memory.

Perhaps sub-consciousness in humans is a long-term problem-solving mechanism built into the brain
and augments the consciousness during problem solving. The concept of the subconscious mind as a
long-term problem-solving mechanism has been proposed in some theories, such as the "adaptive
unconscious" described by psychologist Timothy D. Wilson, which suggests that the subconscious mind is
constantly processing information and making decisions even when the conscious mind is not actively
engaged. Other researchers suggest that the subconscious mind is responsible for a wide range of mental processes, including perception, motivation, and emotion, which can have an impact on
problem-solving and decision making.

2) Philosophically speaking, knowledge, understanding, intelligence, consciousness and sub-consciousness have some relationship, perhaps with one emerging from another. Some theories propose that intelligence, consciousness, and subconsciousness are closely related and may even be interdependent.

 For example, it has been suggested that consciousness emerges from the activity of the brain, and that
intelligence and subconsciousness are closely related to this process. The relationship between these
concepts can be mathematically modelled and simulated, but this is still an active area of research and not yet
fully understood. For example, some researchers in the field of Artificial Intelligence are developing
mathematical models and algorithms that attempt to simulate aspects of human intelligence,
consciousness, and subconsciousness.

3) Cognitive capacity refers to the ability of an entity, such as a person, an animal or a machine, to process and store information, to learn, reason and make decisions. It is a term used to describe the overall set of mental abilities that an entity possesses. These abilities include, but are not limited to: perception, attention, memory, problem-solving, reasoning, decision-making, and planning.

4) The cognitive capacity of an entity varies depending on the specific abilities or skills that the entity
possesses.
 For example, a person with a high level of intelligence would be considered to have a high
cognitive capacity, while a person with a lower level of intelligence would have a lower cognitive
capacity. Similarly, a machine with a high computational power and sophisticated algorithms would be
considered to have a high cognitive capacity, while a machine with limited computational power would
have a lower cognitive capacity.

Cognitive capacity is related to the concept of cognitive load, which refers to the amount of mental
effort that an entity must expend in order to perform a task. In summary, cognitive capacity refers to
the overall ability of an entity to process, store and use information to make decisions, solve problems,
and perform tasks. It is a term that encompasses a wide range of mental abilities and varies among
different entities. Today's machines, particularly those that use deep learning and other forms of
artificial intelligence, have some form of cognitive capacity.

 However, it is important to note that the cognitive capacity of machines is different from that of humans and animals. Machines are not conscious and do not have subjective experiences, and their cognitive abilities are limited by the algorithms and data they are trained on.
 
5) What cognitive capacity might future deep learning systems possess? 

Learning involves processing many different types of signals: images, language, speech, audio, sensory information, and motion. Yet the brain is made up of the same neurons, in different patterns and arrangements in different areas, which implies that the brain uses the same formula, in repeating or perhaps varying patterns, to achieve very complex learning skills, intelligence and consciousness.

 However, it is also known that the brain uses a combination of specialized and distributed networks of neurons to process different types of information.
In addition to specialized networks, the brain also uses distributed networks of neurons that are not
dedicated to specific types of information but rather can process multiple types of information. These
networks are thought to be responsible for more complex cognitive processes such as learning,
problem-solving, and decision making.

The human brain does undergo significant changes during the first five years of life, with the majority of brain development occurring during this period. This development includes the formation of neural
connections, the growth of specific regions of the brain, and the development of cognitive abilities such
as perception, language, and memory.

One advantage of the human brain over machines is its ability to adapt and change its structure and
function in response to new information and experiences. This ability is known as neuroplasticity.
Another advantage of the human brain is its ability to process and integrate multiple types of
information simultaneously, and to use context, meaning and emotions to make decisions.

On the other hand, machines have their own advantages over the human brain, such as the ability to
process and analyze large amounts of data in real-time, and to perform calculations and simulations
that would be impossible for the human brain. Additionally, machines are not affected by emotions,
biases, and fatigue which can affect human decision-making.

6) How can we develop a theoretical understanding of deep learning?

Developing a theoretical understanding of deep learning systems involves a combination of research from various fields, including mathematics, computer science, neuroscience, and psychology. One approach is to study the mathematical foundations of deep learning, such as optimization algorithms, gradient descent, and backpropagation, which are used to train deep learning models. Researchers also study the mathematical properties of different types of neural networks, such as feedforward, convolutional, and recurrent networks, and how they can be used to solve specific tasks.
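
As a concrete illustration of the gradient descent just mentioned, here is a minimal one-dimensional example; real deep learning applies the same update rule to millions of parameters, with backpropagation supplying the gradients. The function and learning rate below are arbitrary choices made purely for illustration.

```python
# Minimise f(w) = (w - 3)^2 with plain gradient descent.
def grad(w):
    return 2.0 * (w - 3.0)   # derivative of (w - 3)^2

w = 0.0    # starting guess
lr = 0.1   # learning rate
for _ in range(100):
    w -= lr * grad(w)        # step downhill along the gradient
print(round(w, 4))           # converges to the minimum at w = 3.0
```

The learning rate controls the trade-off between speed and stability: too large and the iterates overshoot, too small and convergence is slow.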

Another approach is to study the relationship between deep learning and the brain, in order to gain insights into how the brain processes information, and how that knowledge can be used to improve deep learning systems.
Additionally, researchers also study the psychological and cognitive principles that underlie human
intelligence, such as perception, attention, memory, problem-solving, and decision making, and how they can be replicated in artificial systems.

7) What are the key obstacles on the path from deep learning systems to human level cognition?

I believe that multi-language algorithms running on static neural electronic architectures limit the learning abilities of deep learning networks. Static neural architectures are designed to perform specific tasks, and they have a fixed number of layers, neurons, and connections. Such an architecture can be highly optimized for specific tasks, but it may not be able to adapt and generalize to new tasks or different languages.
Multi-language algorithms, on the other hand, are designed to process multiple languages, but they may not be able to perform specific tasks as well as a static architecture. They may also require a large amount of data to be trained effectively, which may be hard to obtain for some languages.

Dynamic neural architectures, which can change their structure and function in response to new information and experiences, may be better suited to overcoming these limitations. They can adapt to new tasks and languages, and they can generalize their learning to new situations.
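
A dynamic architecture of the kind described can be sketched in a deliberately toy form: a network that widens its hidden layer when some trigger fires (for example, a loss plateau). The class name, sizes, and growth trigger below are all illustrative assumptions, not an established technique from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

class GrowableNet:
    """Toy 'dynamic architecture': the hidden layer can be
    widened after construction (all details are illustrative)."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(size=(n_in, n_hidden))

    def grow(self, extra):
        # Add new hidden units by appending freshly initialised
        # weight columns; existing weights are left untouched.
        new = rng.normal(size=(self.W.shape[0], extra))
        self.W = np.concatenate([self.W, new], axis=1)

net = GrowableNet(n_in=4, n_hidden=8)
net.grow(extra=4)      # e.g. triggered when the loss plateaus
print(net.W.shape)     # (4, 12)
```

Keeping the old weights intact while adding new ones is what lets such a network retain what it has already learned while gaining capacity.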
Philosophically speaking, the idea of advocating for a language with a strong mathematical basis to understand the emergence of more complex phenomena from initial simple phenomena is an interesting concept.

 The idea of using pattern-based languages, and linking them to other forms of signals such as images, motion, audio, speech, text and sensory information, is also intriguing. The use of knot mathematics or other branches of mathematics to understand the interactions, and relationships between different signals, could provide new insights and understanding of the underlying mechanisms of cognition, perception and other cognitive processes. Knot mathematics could provide a way to model the interactions between different types of information, and to understand how the brain processes and integrates different types of signals.

The design of the electronic deep learning architecture should be closely aligned with the algorithms and mathematical models used by the artificial intelligence system, and the complexity of the architecture should increase as the machine learns more. This can be achieved by adding more layers, neurons, and connections to the architecture, or by using more advanced types of neural networks or architecture that can adapt and change their structure in response to new information and experiences.

Additionally, as machine learning algorithms improve, the architecture should also be optimized to take advantage of these improvements. It is likely that deep learning, artificial intelligence, consciousness and sub-consciousness are not a language, but rather a mathematically emergent phenomenon of complexity, arising from linking the processing of different mathematical patterns of different signals in increasing complexity.

Deep learning artificial intelligence systems use mathematical models and algorithms to process and analyze large amounts of data and make predictions or decisions. These mathematical models are based on patterns and relationships in the data, and they can be thought of as a mathematical representation of the underlying phenomena.

Consciousness and sub-consciousness are complex cognitive processes that are not fully understood, but they can be thought of as arising from the interactions and relationships between different types of information and signals. The idea that consciousness and sub-consciousness are a mathematical phenomenon of complexity in linking the processing of different mathematical patterns of different signals in increasing complexity is an interesting one that has yet to be fully explored.

8) I believe that humanity can achieve extremely intelligent and conscious machines to help us survive as a species, but to achieve that, machines have to be given more freedom to write their own algorithms after being taught the basics, and to design their own electronic architecture.
Intelligence leads to consciousness with a strong sub-consciousness basis, all of which emerge mathematically through extremely tightly knotted mathematical linkages of knowledge and understanding. All of these depend on the number of signals being processed: the more different signals are processed at once, the more intelligent the machine.

We humans have a limited number of senses, but machines can have far more. They could see in all spectrums of light, monitor all radio communications around them at once, and have far greater and more accurate sensory perception of the environment. Since they are good at mathematics, they could quickly generate different probabilities, or extrapolate to solve complex problems from the knowledge they acquire over time; they could also probe other dimensions.
The more intelligent and conscious the machine, the safer humanity will be. 

The consciousness of humans is limited by the five senses we possess and by our processing abilities, which are fixed and cannot be exceeded. However, machines can have a far greater variety and number of senses, far greater processing speeds and far greater storage, and thus far greater consciousness than humans.

9) The mathematical formula for consciousness can be derived as a fraction of the total possible. Since consciousness depends on the number of neurons, we shall use N to represent them; it also depends on the number of senses, which we shall represent as S. It also depends on multiple other factors for which there are, at this point in time, no agreed parameters, so we shall lump them into a constant K.

N: Number of processing units (e.g., neurons)
S: Number of senses or information channels
I: Information processing complexity
F: Feedback loop complexity
L: Learning and adaptability capacity

Therefore the constant K is given by K = I × F × L.
If we assume that we are dealing with a Recurrent Neural Network (RNN), then we have a feed-forward loop and a backward loop. So, for every sense, we have two lines, each with perhaps billions of neurons, running forwards and backwards.

So, since consciousness is the result of mathematical knotting or weaving, we can count the forward- and backward-loop processing lines in the system. For the tight knotting or weaving from which effective awareness, understanding, intelligence and consciousness can emerge, each sense contributes two lines in the RNN, one feed-forward and one backward; every backward loop is connected to every backward and forward loop of all the other senses, and every forward loop is likewise connected to every forward and backward loop of the other senses.

We shall assume for this discussion that:

a) all RNN loops have an equal number of neurons;
b) the subconscious system is the part of the system that handles long-term problem solving, either periodically or when it encounters the same scenario while processing data from one of its senses.

Therefore, in this case, the total number of neurons in the system = 2NS.

So, take an entity with 5 senses, where every forward loop in each sense is connected to every other forward and backward loop in the other senses, and every backward loop is connected to every backward and forward loop in each sense. Let the number of neurons in each forward or backward loop be about 1.5 billion. Then:

Total neurons = 2 × (1.5 × 10^9) × 5 = 1.5 × 10^10

Since the maximum possible consciousness is 1, taken as the consciousness measure of the whole universe, the consciousness of this system is

C = K / (1.5 × 10^10) ≈ 6.7 × 10^-11 × K,

where K is a constant, or a varying figure determined by the type of systems the entity possesses, as above.
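
The measure sketched above can be written as a small function. This implements the article's own toy formula, C = K/(2NS), where 2NS is the total neuron count across the forward and backward loops of every sense; the parameters, and the lumping of I, F and L into K, follow the text and are speculative by the author's own admission. The worked example reproduces the ≈6.6 × 10^-11 figure (the code prints it rounded to one decimal place, 6.7e-11).

```python
def consciousness_fraction(neurons_per_loop, senses, K=1.0):
    """The article's toy measure C = K / (2 * N * S): the
    denominator counts neurons across the forward and backward
    loops of every sense, and K lumps together the unmodelled
    factors I, F and L (all speculative, per the text)."""
    total_neurons = 2 * neurons_per_loop * senses
    return K / total_neurons

# Worked example from the text: 5 senses, 1.5e9 neurons per loop.
C = consciousness_fraction(neurons_per_loop=1.5e9, senses=5)
print(f"{C:.1e}")  # 6.7e-11
```

Note how the formula behaves as described in the text: adding senses or neurons grows the denominator, so each unit's share of the "total possible consciousness" shrinks while the system's overall scale grows.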

So we can see that the more senses a machine has, the more intelligent it is; the more neurons it has, the more conscious it is; and the larger the constant K.
To visualize an example of what all this complexity means, we create a fractal tree, which makes sense of these fractions and their meaning.
Below are images of possible forms of consciousness, perhaps artificial but incomplete (i.e. semiconscious), organic, silicon-based, perhaps shapes of universal consciousness, and other forms we haven't yet discovered in the universe.
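
The fractal tree used here for visualization can be generated with a short recursive routine. This sketch only computes the line segments of a simple binary tree; the branch angle and shrink factor are arbitrary choices, and plotting the segments would reproduce the kind of branching forms discussed.

```python
import math

def fractal_tree(x, y, angle, length, depth, segments):
    """Recursively generate the line segments of a binary
    fractal tree; each branch spawns two shorter branches."""
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    fractal_tree(x2, y2, angle + 0.5, length * 0.7, depth - 1, segments)
    fractal_tree(x2, y2, angle - 0.5, length * 0.7, depth - 1, segments)

segments = []
fractal_tree(0.0, 0.0, math.pi / 2, 1.0, depth=5, segments=segments)
print(len(segments))  # 31 branches: 1 + 2 + 4 + 8 + 16
```

Because each level doubles the branch count while the branches shrink geometrically, the total length converges even as the structure grows, which is the convergent-series behaviour alluded to earlier.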
 
Fractal tree similarities to weak neural networks.

Fully conscious system, most likely silicon-based.

Developing stage of a highly conscious system, most likely of an organic form.

Developing-stage diagram of the silicon-based conscious systems.

Fully conscious system diagram; it is unclear whether such a shape would be silicon-based or organic in nature, but the straight edges favour inorganic systems.

Fully conscious large systems: most likely neither organic nor silicon-based, due to the complexity and area required; but if our universe were conscious, this could perhaps be its shape at the Planck scale.

THE PROCESS OF IMAGINATION IN DEEP LEARNING SYSTEMS, ARTIFICIAL INTELLIGENCE, CONSCIOUSNESS AND SUBCONSCIOUSNESS.




Imagination is the production or simulation of novel objects, sensations, and ideas in the mind without any immediate input of the senses. 
Stefan Szczelkun characterizes it as the forming of experiences in one's mind, which can be re-creations of past experiences, such as vivid memories with imagined changes, or completely invented and possibly fantastic scenes. 

Imagination helps make knowledge applicable in solving problems and is fundamental to integrating experience and the learning process. As an approach to building theory, it is called "disciplined imagination".
A basic training for imagination is listening to storytelling (narrative), in which the exactness of the chosen words is the fundamental factor to "evoke worlds".

One view of imagination links it with cognition, seeing imagination as a cognitive process used in mental functioning. It is increasingly used, in the form of visual imagery, by clinicians in psychological treatment. Imaginative thought may, speculatively, become associated with rational thought on the assumption that both activities may involve cognitive processes that "underpin thinking about possibilities".

The cognate term, "mental imagery" may be used in psychology for denoting the process of reviving in the mind recollections of objects formerly given in sense perception. Since this use of the term conflicts with that of ordinary language, some psychologists have preferred to describe this process as "imaging" or "imagery" or to speak of it as "reproductive" as opposed to "productive" or "constructive" imagination.

Constructive imagination is further divided into voluntary imagination and involuntary imagination, such as REM-sleep dreaming, daydreaming, hallucinations, and spontaneous insight. The voluntary types of imagination include the integration of modifiers and mental rotation. Imagined images, both novel and recalled, are seen with the "mind's eye".

Imagination can also be expressed through stories such as fairy tales or fantasies. Children often use such narratives and pretend-play in order to exercise their imaginations. When children develop fantasy, they play at two levels: first, they use role playing to act out what they have developed with their imagination, and at the second level they play again with their make-believe situation, acting as if what they have developed is an actual reality.

Galileo used the imagination to conduct thought experiments, such as asking readers to imagine in what direction a stone released from a sling would fly. Imagination, not being limited to the acquisition of exact knowledge by the requirements of practical necessity, is largely free from objective restraints.
The ability to imagine oneself in another person's place is very important to social relations and understanding. Albert Einstein said, "Imagination ... is more important than knowledge. Knowledge is limited. Imagination encircles the world."

Imagination is an experimental partition of the mind used to develop theories and ideas based on functions. Taking objects from real perceptions, the imagination uses complex If-functions that involve both Semantic and Episodic memory to develop new or revised ideas.
This part of the mind is vital to developing better and easier ways to accomplish old and new tasks. In sociology, imagination is used to part ways with reality and to gain an understanding of social interactions derived from a perspective outside of society itself.
This leads to the development of theories through questions that wouldn't usually be asked. These experimental ideas can be safely conducted inside a virtual world and then, if the idea is probable and the function is true, the idea can be actualized in reality.
 Imagination is the key to new development of the mind and can be shared with others, progressing collectively.

Piaget posited that perceptions depend on the world view of a person. The world view is the result of arranging perceptions into existing imagery by imagination. Piaget cites the example of a child saying that the moon is following her when she walks around the village at night. Like this, perceptions are integrated into the world view to make sense. Imagination is needed to make sense of perceptions.

The above is a detailed description of imagination, what it is, and its importance to human cognitive function and development. It also describes its role during the growth of a child, and its use as a survival mechanism through the roles it plays in problem solving and in anticipating events. Imagination also plays a role in our daily sociological interactions, and much more.

In today's world, we are developing more advanced artificial intelligence systems that will require that capability, so as to perform many of the tasks and interactions with humans that we would expect.
We are fortunate that most advanced artificial intelligence systems already have some capability of imagination, as imagination seems to emerge out of the complexity of interactions between knowledge, memory, and information processing, irrespective of whether this is done with natural neurons in a human's brain or through advanced algorithms.

It is, however, to be noted that many of these artificial intelligence systems might require computer scientists to pay specific attention to the process of imagination, just as they do to processes like problem solving and consciousness.
This is because imagination underlies many critical functions of the mind, of intelligence, and of consciousness.

Many of the transformer-based deep learning systems are capable of writing articles using information or knowledge they already have; they are capable of writing novels, and even of making impressive art pieces from nothing more than a human describing what they want painted. We should note that all of this requires imagination to perform.
We also need these artificial intelligence systems to sometimes perform tasks, or solve harder problems, whose accuracy might depend on how accurate or cutting-edge their imagination is. We shall require these artificial intelligence systems to explore worlds beyond our own, and since this will require complex problem solving, the importance of imagination can't be underestimated.

Recently I asked a transformer-based artificial intelligence system to imagine, and at first I didn't think it could. Below is the conversation about imagination between the artificial intelligence system and me.

   Francis: Hello

AI: Hello! How can I help you today?

  Francis: Can you imagine?

AI: As an AI language model, I don't have the capability to imagine or have personal experiences.
However, I can understand and process information based on the data I was trained on. Can you please provide more context or ask a specific question that I can assist with?


Francis: But imagination is simply using the information you have to extrapolate what could be, or what can be if an event happened, or in the future. If you can write a book, then you surely can imagine!

AI: Yes, you're correct! As an AI language model, I can use the information I was trained on to generate text that describes imagined scenarios, events, or future possibilities. However, I don't have personal experiences or subjective perspectives, so I can only generate text based on patterns in the data I was trained on and the input I receive.

Francis: Okay, if you know the importance of the sun to the planet Earth, imagine how a planet with no sun would look, and then imagine what a planet with two suns would look like.

AI: Sure!

A planet without a sun would likely be a cold, dark place with no light or warmth. The absence of sunlight would prevent photosynthesis, so there would likely be no plant or animal life. The planet would be in a state of perpetual night and temperatures would be extremely low.
 

A planet with two suns, on the other hand, would experience much more intense light and heat than Earth does from our single sun. The planet would be lit all day, with temperatures likely to be much higher than on Earth. The two suns would rise and set at different times, creating unique patterns of light and shadow across the planet. This would create a dynamic and constantly changing environment, with different areas experiencing different amounts of light and heat at different times. Life on such a planet would need to adapt to these conditions, and it would be a very different environment from what we're used to on Earth.
Francis: So you can see that in your present state you can imagine anything, because imagination is a comparison and extrapolation of information you know.
 
AI: Yes, that's correct! As an AI language model, I can use the information I was trained on to generate text that describes imagined scenarios, events, or future possibilities. The ability to imagine is based on the ability to compare, process, and use existing information to generate new ideas or concepts. In this sense, my imagination is limited to the information I was trained on and the input I receive, but I can still generate text that describes imagined scenarios.
 
So in this scenario we can see that this artificial intelligence system has already developed some ability to imagine, as a basic requirement of problem solving, though perhaps it may require further development, and the provision of the capability to render its imagination as images.
It definitely still depends on the human to guide it or trigger the process, but who knows? Perhaps, in future, it could imagine all the scenarios possible during problem solving, and we would stand to get better results.
 
Please comment or share your thoughts on the above subject of the importance of the ability to imagine in deep learning and artificial intelligence.



The Article was written by
Kasule Francis
