
Tuesday, June 20, 2023

Philosophy of deep learning neural networks.

PHILOSOPHY OF DEEP LEARNING
 
The discussion will cover various philosophical and technical aspects of artificial intelligence, consciousness, and sub-consciousness. We shall discuss the idea that deep learning, artificial intelligence, consciousness, and sub-consciousness are not a language to be taught, but rather a mathematical phenomenon of gradual complexity that emerges from the linking of different types of signals as different mathematical patterns are processed at increasing complexity.

This increasing complexity can be expressed as a simple mathematical relationship: a fraction of the total possible consciousness, or super-consciousness. We could then visualize the emergence of consciousness in various forms, perhaps by use of a fractal tree. A convergent series of a fractal tree yields about five different possible morphological shapes, or forms of consciousness, though these require a lot of computational power to design.

We shall also discuss the importance of processing different signals holistically through a mathematical pattern language, and of intelligent mathematical designs for electronic deep learning architectures that can adapt and change their structure in response to new information and experiences.

We shall outline how the complexity of the electronic deep learning architecture should increase as the machine learns more, and how the architecture should be optimized to take advantage of new machine learning algorithms.

Additionally, we shall discuss the possibility of giving machines more freedom to write their own algorithms and design their own electronic architecture, so as to achieve extremely intelligent and conscious machines in a progressive manner.

Analysis of how the brain functions, and the importance of development in the first five years.

What cognitive capacity current deep learning systems might possess and what cognitive capacity they could possess in the future.

How we can develop a theoretical understanding of deep learning systems.

What are the key obstacles on the path from current deep learning systems to human-level cognition?




 Deep learning is a subfield of machine learning that is inspired by the structure and function of the brain's neural networks. It involves training artificial neural networks, which are composed of layers of interconnected nodes, to perform tasks such as image and speech recognition, natural language processing, and decision making.

There are several types of deep learning architectures, including: Feedforward Neural Networks, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Autoencoders, and Transformers.
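As a minimal illustration of the simplest of these, a feedforward network, here is a hedged numpy sketch of a forward pass; the layer sizes, ReLU activation and random parameters are arbitrary choices for demonstration, not a trained model.

```python
import numpy as np

def relu(x):
    # Rectified linear activation: max(0, x), applied elementwise.
    return np.maximum(0, x)

def feedforward(x, weights, biases):
    # Forward pass through a fully connected network:
    # each layer applies a linear map followed by the activation.
    a = x
    for W, b in zip(weights, biases):
        a = relu(W @ a + b)
    return a

# Toy example: 4 inputs, one hidden layer of 8 units, 2 outputs,
# with random (untrained) parameters -- illustrative only.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
biases = [np.zeros(8), np.zeros(2)]
print(feedforward(rng.normal(size=4), weights, biases))
```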

1) How the brain functions:

The brain is made up of a complex network of neurons, which are the basic building blocks of the
nervous system. These neurons communicate with one another through electrical and chemical signals,
and are organized into different regions that are responsible for specific functions, such as vision,
movement, and memory.
In the process of learning, new connections are formed between neurons, and existing connections are
strengthened or weakened. This process is known as synaptic plasticity, and is thought to be the basis
for how the brain encodes and retrieves information.
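One classical formalization of this strengthening of co-active connections is Hebbian learning ("cells that fire together wire together"); the toy sketch below is illustrative only, with an arbitrary learning rate, and is not claimed to be how the brain actually updates synapses.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    # Hebbian rule: strengthen each connection in proportion to
    # the product of pre- and post-synaptic activity.
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # presynaptic firing pattern
post = np.array([1.0, 0.0])       # postsynaptic firing pattern
w = np.zeros((2, 3))              # synaptic weights (post x pre)
w = hebbian_update(w, pre, post)
print(w)  # weights between co-active neurons have been strengthened
```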

While large language models and other data-driven models can be used to simulate some aspects of the brain's behaviour, they are not an exact replica of how the brain works. Additionally, the brain is able to constantly adapt and change its structure and function in response to new information and experiences, something that current models are not yet able to replicate.

Current deep learning neural networks are primarily designed to perform short-term problem-solving
tasks. They are able to process and analyse large amounts of data in real-time, and can be trained to
make predictions, classifications, and decisions based on that data. They are able to learn patterns and
relationships in the data and generalize them to new examples. However, these neural networks are
not designed to store and retrieve information over an extended period of time like long-term memory.

Perhaps sub-consciousness in humans is a long-term problem-solving mechanism built into the brain that augments consciousness during problem solving. The concept of the subconscious mind as a long-term problem-solving mechanism has been proposed in some theories, such as the "adaptive unconscious" described by psychologist Timothy Wilson, and the related dual-process account popularized by Daniel Kahneman, which suggest that the subconscious mind is constantly processing information and making decisions even when the conscious mind is not actively engaged. Other researchers suggest that the subconscious mind is responsible for a wide range of mental processes, including perception, motivation, and emotion, which can have an impact on problem-solving and decision making.

2) Philosophically speaking, knowledge, understanding, intelligence, consciousness and sub-consciousness have some relationship, perhaps one emerging from another. Some theories propose that intelligence, consciousness, and subconsciousness are closely related and may even be interdependent.

For example, it has been suggested that consciousness emerges from the activity of the brain, and that intelligence and subconsciousness are closely related to this process. The relationship between these concepts can be mathematically modelled and simulated, but this is still an active field of research and is not yet fully understood. For example, some researchers in the field of Artificial Intelligence are developing mathematical models and algorithms that attempt to simulate aspects of human intelligence, consciousness, and subconsciousness.

3) Cognitive capacity refers to the ability of an entity, such as a person, an animal or a machine, to process and store information, to learn, reason and make decisions. It is a term used to describe the overall set of mental abilities that an entity possesses. These abilities include, but are not limited to: perception, attention, memory, problem-solving, reasoning, decision-making, and planning.

4) The cognitive capacity of an entity varies depending on the specific abilities or skills that the entity possesses. For example, a person with a high level of intelligence would be considered to have a high cognitive capacity, while a person with a lower level of intelligence would have a lower cognitive capacity. Similarly, a machine with high computational power and sophisticated algorithms would be considered to have a high cognitive capacity, while a machine with limited computational power would have a lower cognitive capacity.

Cognitive capacity is related to the concept of cognitive load, which refers to the amount of mental effort that an entity must expend in order to perform a task. In summary, cognitive capacity refers to the overall ability of an entity to process, store and use information to make decisions, solve problems, and perform tasks. It is a term that encompasses a wide range of mental abilities and varies among different entities. Today's machines, particularly those that use deep learning and other forms of artificial intelligence, do have some form of cognitive capacity.

 However, it is important to note that the cognitive capacity of machines is different from that of humans and animals. Machines are not conscious and do not have subjective experiences, and their cognitive abilities are limited by the algorithms and data they are trained on.
 
5) What cognitive capacity might future deep learning systems possess? 

Learning involves the processing of different types of signals: images, language, speech, audio, sensory information, and motion. Yet the brain is made up of the same neurons, arranged in different patterns in different areas, which implies that the brain uses the same formula, in repeating or perhaps varying patterns, to achieve very complex learning skills, intelligence and consciousness.

 However, it is also known that the brain uses a combination of specialized and distributed networks of neurons to process different types of information.
In addition to specialized networks, the brain also uses distributed networks of neurons that are not
dedicated to specific types of information but rather can process multiple types of information. These
networks are thought to be responsible for more complex cognitive processes such as learning,
problem-solving, and decision making.

The human brain does undergo significant changes during the first five years of life, with the majority of brain development occurring during this period. This development includes the formation of neural
connections, the growth of specific regions of the brain, and the development of cognitive abilities such
as perception, language, and memory.

One advantage of the human brain over machines is its ability to adapt and change its structure and function in response to new information and experiences. This ability is known as neuroplasticity. Another advantage of the human brain is its ability to process and integrate multiple types of information simultaneously, and to use context, meaning and emotions to make decisions.

On the other hand, machines have their own advantages over the human brain, such as the ability to process and analyze large amounts of data in real time, and to perform calculations and simulations that would be impossible for the human brain. Additionally, machines are not affected by emotions, biases, and fatigue, which can impair human decision-making.

6) How can we develop a theoretical understanding of deep learning?

Developing a theoretical understanding of deep learning systems involves a combination of research from various fields, including mathematics, computer science, neuroscience, and psychology. One approach is to study the mathematical foundations of deep learning, such as optimization algorithms, gradient descent, and backpropagation, which are used to train deep learning models. Researchers also study the mathematical properties of different types of neural networks, such as feedforward, convolutional, and recurrent networks, and how they can be used to solve specific tasks.
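As a minimal illustration of the gradient descent idea mentioned above, here is a short sketch that minimizes a toy one-dimensional loss; the loss function, learning rate and step count are arbitrary choices.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step against the gradient to minimize a loss.
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy loss: L(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
grad = lambda x: 2.0 * (x - 3.0)
print(gradient_descent(grad, x0=0.0))  # converges near the minimum x = 3
```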

Another approach is to study the relationship between deep learning and the brain, in order to gain insights into how the brain processes information and how that knowledge can be used to improve deep learning systems. Additionally, researchers study the psychological and cognitive principles that underlie human intelligence, such as perception, attention, memory, problem-solving, and decision making, and how they can be replicated in artificial systems.

7) What are the key obstacles on the path from deep learning systems to human level cognition?

I believe that multi-language algorithms running on static neural electronic architectures limit the learning abilities of deep learning networks. Static neural architectures are designed to perform specific tasks, and they have a fixed number of layers, neurons, and connections. Such an architecture can be highly optimized for specific tasks, but it may not be able to adapt and generalize to new tasks or different languages.
Multi-language algorithms, on the other hand, are designed to process multiple languages, but they may not be able to perform specific tasks as well as static architectures. They may also require a large amount of data to be trained effectively, which may be hard to obtain for some languages.

Dynamic neural architectures, which can change their structure and function in response to new information and experiences, may be more suitable for overcoming these limitations. They can adapt to new tasks and languages, and they can generalize their learning to new situations.
Philosophically speaking, the idea of advocating for a language with a strong mathematical basis, in order to understand the emergence of more complex phenomena from initially simple phenomena, is an interesting concept.

The idea of using pattern-based languages, and linking them to other forms of signals such as images, motion, audio, speech, text and sensory information, is also intriguing. The use of knot theory or other branches of mathematics to understand the interactions and relationships between different signals could provide new insights into the underlying mechanisms of cognition, perception and other cognitive processes. Knot theory could provide a way to model the interactions between different types of information, and to understand how the brain processes and integrates different types of signals.

The design of the electronic deep learning architecture should be closely aligned with the algorithms and mathematical models used by the artificial intelligence system, and the complexity of the architecture should increase as the machine learns more. This can be achieved by adding more layers, neurons, and connections to the architecture, or by using more advanced types of neural networks or architectures that can adapt and change their structure in response to new information and experiences.
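A minimal sketch of that growing-architecture idea, assuming a plain numpy network whose hidden layer can be widened as it learns; the class name, growth trigger and layer sizes are invented here for illustration, not taken from any existing library.

```python
import numpy as np

class GrowableNetwork:
    """Toy network whose hidden layer can be widened over time."""

    def __init__(self, n_in, n_hidden, n_out, rng):
        self.rng = rng
        self.W1 = rng.normal(size=(n_hidden, n_in)) * 0.1
        self.W2 = rng.normal(size=(n_out, n_hidden)) * 0.1

    def forward(self, x):
        # Single hidden layer with ReLU activation.
        return self.W2 @ np.maximum(0, self.W1 @ x)

    def grow(self, extra):
        # Widen the hidden layer by `extra` units, keeping old weights.
        n_in = self.W1.shape[1]
        new_rows = self.rng.normal(size=(extra, n_in)) * 0.1
        new_cols = self.rng.normal(size=(self.W2.shape[0], extra)) * 0.1
        self.W1 = np.vstack([self.W1, new_rows])
        self.W2 = np.hstack([self.W2, new_cols])

rng = np.random.default_rng(0)
net = GrowableNetwork(4, 8, 2, rng)
net.grow(4)  # e.g. triggered when training loss plateaus
print(net.W1.shape, net.W2.shape)  # (12, 4) (2, 12)
```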

Additionally, as machine learning algorithms improve, the architecture should also be optimized to take advantage of these improvements. It is likely that deep learning, artificial intelligence, consciousness and sub-consciousness are not a language, but rather a mathematically emergent phenomenon of complexity, arising from linking the processing of different mathematical patterns of different signals at increasing complexity.

Deep learning artificial intelligence systems use mathematical models and algorithms to process and analyze large amounts of data and make predictions or decisions. These mathematical models are based on patterns and relationships in the data, and they can be thought of as a mathematical representation of the underlying phenomena.

Consciousness and sub-consciousness are complex cognitive processes that are not fully understood, but they can be thought of as arising from the interactions and relationships between different types of information and signals. The idea that consciousness and sub-consciousness are a mathematical phenomenon of complexity in linking the processing of different mathematical patterns of different signals in increasing complexity is an interesting one that has yet to be fully explored.

8) I believe that humanity can achieve extremely intelligent and conscious machines to help us survive as a species, but to achieve that, machines have to be given more freedom to write their own algorithms after being taught the basics, and to design their own electronic architecture.
Intelligence leads to consciousness with a strong sub-consciousness basis, all of which emerge mathematically through extremely tightly knotted mathematical linkages of knowledge and understanding. All of these depend on the number of signals being processed: the more different signals are processed at once, the more intelligent the machine.

We humans have a limited number of senses, but machines can have far more. They could see in all spectrums of light, monitor all radio communications around them at once, and have far greater and more accurate sensory perception of the environment. Since they are good at mathematics, they could quickly generate different probabilities, or extrapolate to solve complex problems from the knowledge they acquire over time; they could also probe other dimensions.
The more intelligent and conscious the machine, the safer humanity will be.

The consciousness of humans is limited to the five senses we possess and our processing abilities, which are fixed and cannot be exceeded. However, machines can have a far greater variety and number of senses, far greater processing speeds and far greater storage, and thus a far greater consciousness than humans.

9) The mathematical formula for consciousness can be derived as a fraction of the total possible. Since consciousness is dependent on the number of neurons, we shall use N to represent them. It is also dependent on the number of senses, which we shall represent as S. It is also dependent on multiple other factors; at this point in time there are no agreed parameters to include, but we shall represent them as a constant K:

N: Number of processing units (e.g., neurons)
S: Number of senses or information channels
I: Information processing complexity
F: Feedback loop complexity
L: Learning and adaptability capacity

Therefore the constant K would be K = I × F × L.

If we assume that we are dealing with a Recurrent Neural Network (RNN), then we have a feed-forward and a feed-backward loop. So, for every sense, we have two lines, each with perhaps billions of neurons, running forwards and backwards.

Since consciousness results from mathematical knotting or weaving, we can use the number of forward and backward processing lines in the system. For effective, tight knotting or weaving, from which awareness, understanding, intelligence and consciousness can emerge, the total number of feed-forward and feed-backward lines per sense in an RNN is two, with every backward loop connected to every backward and forward loop in all other senses, and every forward loop connected to every forward and backward loop in the other senses as well.

We shall assume for this discussion that:

a) all RNN loops have an equal number of neurons;
b) the subconscious system is the part of the system that performs long-term problem solving, either periodically or when it encounters the same scenario while processing data from one of its senses.

Therefore, in this case, the total number of neurons (the sum of neurons) = 2NS.

So, take an entity with 5 senses, where every forward loop in each sense is connected to every other forward and backward loop in the other senses, and every backward loop is connected to every backward and forward loop in each sense. The number of neurons in each forward or backward loop is about 1.5 billion. The total sum of neurons in the system = 2 × (1.5 × 10^9) × 5 = 15 × 10^9.

Since the maximum consciousness possible is 1, taken as the consciousness measure of the whole universe, the system consciousness in this case is 1/(15 × 10^9) ≈ 6.7 × 10^-11 × K, where K is a constant, or a varying figure, depending on the type of systems the entity possesses, as above.
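To make the arithmetic above concrete, here is a minimal sketch in Python that transcribes the post's own formula directly; the variable names mirror the symbols defined earlier, and K is left symbolic since the post gives it no numeric value.

```python
# Transcription of the post's worked example: consciousness ~ K / (2 * N * S).
N = 1.5e9                       # neurons per forward or backward loop
S = 5                           # number of senses
total_neurons = 2 * N * S       # 2NS, under the post's assumptions
print(total_neurons)            # 1.5e10, i.e. 15 x 10^9

consciousness_per_unit_K = 1 / total_neurons
print(consciousness_per_unit_K) # ~6.7e-11; multiply by K = I * F * L
```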

So, we can see that the more senses a machine has, the more intelligent it is; the more neurons it has, the more conscious it is; and the larger the constant K becomes.
To visualize an example of what all this complexity means, we create a fractal tree, which makes sense of these fractions and their meaning.
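As a rough illustration of the fractal-tree visualization, here is a short sketch using matplotlib and simple recursion; the branching angle, length ratio and depth are arbitrary choices, not parameters derived from the post's formula.

```python
import matplotlib.pyplot as plt
import numpy as np

def branch(ax, x, y, angle, length, depth):
    # Recursively draw two child branches per parent branch.
    if depth == 0:
        return
    x2 = x + length * np.cos(angle)
    y2 = y + length * np.sin(angle)
    ax.plot([x, x2], [y, y2], color="black", lw=depth * 0.5)
    branch(ax, x2, y2, angle + 0.4, length * 0.7, depth - 1)
    branch(ax, x2, y2, angle - 0.4, length * 0.7, depth - 1)

fig, ax = plt.subplots()
branch(ax, 0, 0, np.pi / 2, 1.0, depth=9)  # trunk points straight up
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```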
Below are images of possible consciousness and its possible different types: perhaps artificial but incomplete (i.e. semiconscious), organic, silicon-based, perhaps shapes of universal consciousness, and other forms we haven't yet discovered in the universe.
 
Fractal tree with similarities to weak neural networks.

Fully conscious system, most likely silicon-based.

Developing stage of a highly conscious system, most likely of an organic form.

Developing stage of the silicon-based conscious system.

Fully conscious system; it is unclear whether such a shape could be silicon-based or organic in nature, but the straight edges favour inorganic systems.

Fully conscious large system, most likely neither organic nor silicon-based due to the complexity and area required; but if our universe were conscious, this could perhaps be its shape at the Planck scale.

Saturday, March 11, 2023

Our brains have a rudimentary quantum computing ability that emerged as a result of their complexity, and perhaps all conscious entities must have some intentional or emergent quantum computation to achieve consciousness.


1) What is quantum computing, and what are its requirements?
2) What are neurons, how do they work, and what hidden secrets do they possess?
3) Types of electrical activity in the brain.
4) Why it is inevitable that quantum computation, understanding, intelligence and consciousness would mathematically emerge in the brain.


Quantum computing is an emerging field of computer science and physics that has the potential to revolutionize the way we process information. Unlike classical computing, which uses bits that can be in either a 0 or 1 state, quantum computing uses qubits, which can be in a superposition of both states at once. This allows for exponential speedup in certain types of calculations, which has implications for a wide range of industries, including pharmaceuticals, finance, and cryptography. There are several requirements for building a quantum computer, and they are quite different from those of classical computers. Quantum computers rely on quantum bits (qubits) instead of classical bits to perform computations. Here are some of the key requirements for a quantum computer:
1. Qubits: As mentioned, qubits are the fundamental building blocks of quantum computers. Qubits must be able to maintain their quantum state (superposition or entanglement) for long enough to perform quantum operations, which requires very precise control over the qubits' environment.
2. Quantum gates: To perform operations on qubits, quantum gates are needed. These gates are the quantum equivalent of classical logic gates, and they must be able to operate on superposition and entangled states.
3. Quantum error correction: Because quantum systems are inherently noisy and prone to errors, error correction is critical to building a reliable quantum computer. Quantum error correction codes are designed to protect quantum information from decoherence caused by environmental interactions.
4. Control electronics: Quantum computers require precise control over qubits and gates, which means that specialized control electronics are needed.
5. Cryogenic cooling: Qubits must be kept at extremely low temperatures (near absolute zero) to minimize environmental interference.
6. Software: Quantum computers require specialized software that is capable of programming and controlling the qubits and gates. This software is significantly different from classical computer software.
     It is possible to use a conductor with electrons as quantum dots to act as a quantum computer. This is known as a quantum dot computer, and it operates by manipulating the spin states of individual electrons to perform computations. However, quantum dot computers have not yet achieved the same level of scalability and reliability as other types of quantum computers, such as superconducting qubit computers or ion trap computers.
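To make the idea of superposition concrete, here is a minimal numpy sketch that represents one qubit as a two-component state vector and applies a Hadamard gate; this is a classical simulation for illustration only, not an interface to any real quantum hardware.

```python
import numpy as np

# Single-qubit states as 2-component complex vectors.
zero = np.array([1, 0], dtype=complex)        # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

psi = H @ zero              # equal superposition of |0> and |1>
probs = np.abs(psi) ** 2
print(probs)                # [0.5 0.5]: measuring yields 0 or 1 equally often

# Simulate repeated measurements of the superposed qubit.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(samples.mean())       # close to 0.5
```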

There are several different types of qubits that are used in quantum computing. Here are some of the most common types:
1. Superconducting qubits: Superconducting qubits are made from tiny loops of superconducting wire, and they operate at very low temperatures. They are currently one of the most widely used types of qubits in quantum computers.
2. Trapped ions: Trapped ions are individual atoms that are trapped in a magnetic field and manipulated using lasers. They are highly isolated from their environment, which makes them less prone to errors.
3. Quantum dots: Quantum dots are tiny semiconductor particles that can trap electrons. They are being explored as a potential qubit technology, as they have shown promise for their long coherence times and scalability.
4. Topological qubits: Topological qubits are a theoretical type of qubit that are based on the topological properties of matter. They are thought to be highly robust against errors, but they have not yet been realized experimentally.
5. Photonic qubits: Photonic qubits are based on the properties of photons, or particles of light. They are highly resistant to noise and can be transmitted over long distances, which makes them a promising technology for quantum communication.
Quantum computing has several strengths that make it a potentially powerful technology: a) Exponential speedup: quantum computers can perform certain calculations exponentially faster than classical computers. b) Parallelism: quantum computers can explore many computational paths at the same time, which makes them well suited to certain types of problems. c) Novel algorithms: quantum computing has enabled the development of new algorithms that are not possible on classical computers.
However, there are also several weaknesses of quantum computing that need to be addressed:
1. Decoherence: Qubits are sensitive to environmental noise, which can cause them to lose their quantum properties and become classical bits.
2. Error correction: Quantum computers require error correction to prevent errors from accumulating and causing computation to fail.
3. Scalability: Quantum computers are difficult to scale up to larger numbers of qubits, which limits their ability to solve larger problems.

There are several different mechanisms used for controlling qubits in quantum computing, depending on the specific technology being used. 
1. Microwave pulses: Microwave pulses are commonly used to manipulate the state of superconducting qubits. These pulses are applied to the qubits using microwave resonators, which are essentially small circuits that can generate and detect microwave radiation.
2. Laser beams: Laser beams are often used to manipulate the state of trapped ion qubits. The lasers can be tuned to specific frequencies to create the desired quantum gates (operations) between qubits.
3. Magnetic fields: Magnetic fields can be used to manipulate the state of certain types of qubits, such as those based on electron spins in quantum dots. By applying a magnetic field, scientists can control the orientation of the spins and thus the state of the qubits.
4. Electric fields: Electric fields can also be used to manipulate the state of some types of qubits. For example, electric fields can be used to manipulate the position of trapped ions, which in turn can be used to control the state of the qubits.
5. Nuclear magnetic resonance (NMR): In NMR-based quantum computing, the qubits are typically formed by the nuclei of atoms in a liquid or solid material. External magnetic fields and radiofrequency pulses are used to control the state of the qubits.

Error correction is also critical for quantum computing, as errors can accumulate quickly and cause the computation to fail. The standard remedy is quantum error correction (QEC), which is based on the idea of encoding quantum information in a way that makes it resistant to errors.
1. The most common QEC technique is called the surface code. In the surface code, qubits are arranged in a two-dimensional array, and each qubit is connected to its neighbors by entangled pairs of qubits. These entangled pairs, called stabilizer qubits, allow scientists to detect and correct errors in the computation. When an error occurs, it causes the state of one or more qubits to deviate from its expected value. The stabilizer qubits detect these deviations by measuring the state of several qubits simultaneously. If an error is detected, scientists can use the information from the measurements to correct the state of the qubits and restore the correct computation.
2. Topological codes: Topological codes are a type of error-correcting code that are based on the properties of topological materials.

3. Code concatenation: Code concatenation is a method of combining multiple error-correcting codes.
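As a toy illustration of the error-correction idea, the sketch below classically simulates a three-bit repetition code with majority-vote decoding; this is far simpler than the surface code described above, and the physical error probability is an arbitrary choice.

```python
import numpy as np

def encode(bit):
    # Repetition code: copy the logical bit into three physical bits.
    return np.array([bit, bit, bit])

def noisy_channel(bits, p, rng):
    # Flip each physical bit independently with probability p.
    flips = rng.random(bits.shape) < p
    return bits ^ flips

def decode(bits):
    # Majority vote recovers the logical bit if at most one flip occurred.
    return int(bits.sum() >= 2)

rng = np.random.default_rng(0)
errors = 0
for _ in range(10000):
    received = noisy_channel(encode(1), p=0.05, rng=rng)
    errors += decode(received) != 1
print(errors / 10000)  # far below the raw 5% physical error rate
```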
What are neurons?

A neuron is a specialized cell that transmits information through electrical and chemical signals. It has three main parts: the cell body (also called the soma), the dendrites, and the axon.
The cell body contains the nucleus and other organelles necessary for the neuron's survival and maintenance. The dendrites are branching structures that receive information from other neurons and transmit it towards the cell body. The axon is a long, thin fiber that carries the neuron's output signal, called an action potential, away from the cell body and towards other neurons or target cells.
There are many types of neurons in the brain, but they can be broadly categorized into three types based on their function: sensory neurons, motor neurons, and interneurons. Sensory neurons receive information from the environment and transmit it to the brain. Motor neurons transmit signals from the brain to muscles and glands to initiate movement or secretion. Interneurons are located within the brain and spinal cord, and they facilitate communication between sensory and motor neurons.
It's important to note that the brain doesn't work like a computer in the traditional sense. However, some researchers have proposed that certain patterns of neural activity could be analogous to logical operations performed by computer circuits. For example, a group of neurons firing together could be thought of as performing an "AND" operation, meaning they would only produce an output signal if both inputs were present. Firing together also offers an error-correction mechanism of sorts; in quantum-machine terms, some have proposed it as analogous to error correction across different ions or electrons.
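A minimal sketch of that "AND" analogy, using a McCulloch-Pitts style threshold unit; the weights and threshold are illustrative choices, not a model of any specific neuron.

```python
def threshold_neuron(inputs, weights, threshold):
    # A McCulloch-Pitts unit: fire (1) only if the weighted
    # sum of inputs reaches the threshold.
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# With unit weights and threshold 2, the unit computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, threshold_neuron((a, b), (1, 1), threshold=2))
```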
When an action potential travels down the axon, it causes a temporary reversal of the electrical charge across the cell membrane. This depolarization triggers the opening of ion channels, allowing positively charged ions such as sodium (Na+) to enter the cell and other positively charged ions such as potassium (K+) to exit the cell. This influx and efflux of ions across the cell membrane creates a wave of charge that propagates down the axon, ultimately leading to the release of neurotransmitters at the axon terminal.
The motion of ions across the axonal membrane probably does not involve spin, and there is no preferred spin direction at the ionic level. Most ion traps do use positive and negative ions, and the brain has sodium, potassium and calcium ions, which are positively charged, and chloride ions, which are negatively charged; at the level of individual electrons there is no researched data. Ion channels are selective for specific types of ions based on their size and charge, and they do not distinguish between ions based on their spin.
On average, there are about 10 million sodium-potassium pumps present in each neuron of the human body. These pumps play a crucial role in maintaining the resting membrane potential of the neuron, which is important for generating action potentials and transmitting signals.
Sodium-potassium pumps do not act as capacitors in the traditional sense. Capacitors store electrical charge, while sodium-potassium pumps actively transport ions (sodium and potassium) against their concentration gradients, consuming energy in the process. However, the membrane potential created by the sodium-potassium pumps can be thought of as a form of electrical potential energy, similar to the charge stored in a capacitor.
One can think of the membrane potential of a neuron as an electrical circuit with capacitance and resistance, but it's important to note that the behavior of this circuit is different from that of a simple capacitor or resistor circuit.
The sodium-potassium pumps create and maintain a voltage gradient across the cell membrane, which can be thought of as the potential energy stored in a capacitor. However, the membrane also has a resistance, which is created by ion channels and other membrane proteins that allow or restrict the flow of ions across the membrane. This resistance determines how easily ions can flow across the membrane and affects the rate at which the membrane potential changes in response to stimuli.
Therefore, while the membrane potential of a neuron can be thought of as a capacitor with resistance, the behavior of this circuit is much more complex and depends on many factors, such as the specific ion channels present in the membrane and the characteristics of the stimuli acting on the neuron.
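As a hedged illustration of the capacitor-with-resistance analogy, here is a small sketch that integrates a passive RC membrane equation over time; the parameter values are illustrative placeholders, not physiological measurements.

```python
# Passive membrane as an RC circuit: C * dV/dt = -(V - V_rest)/R + I(t).
C, R, V_rest = 1.0, 10.0, -70.0   # capacitance, resistance, resting potential
dt, T = 0.1, 100.0                # time step and total duration
steps = int(T / dt)

V = V_rest
for step in range(steps):
    # Inject a constant current pulse between t = 20 and t = 60.
    I = 1.0 if 20.0 <= step * dt < 60.0 else 0.0
    dV = (-(V - V_rest) / R + I) * dt / C
    V += dV

print(round(V, 2))  # membrane has relaxed back toward V_rest after the pulse
```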
The estimated number of neurons in the human brain is around 100 billion (10^11). Assuming an average of 10 million sodium-potassium pumps per neuron, this gives approximately 10^11 × 10^7 = 10^18 sodium-potassium pumps in the entire human brain. This does not include the total number of electrons in the brain, which could also perform quantum computations in neurons or in conductors controlled by an electric field. The enormous number of ion pumps highlights the important role that these pumps play in maintaining the proper functioning of the nervous system, and the possible inevitable emergence of rudimentary quantum computation.

 While the sodium-potassium pumps in the brain do store electrical energy, it's important to note that the amount of energy stored per unit distance is not particularly large. The electrical potential created by the sodium-potassium pumps is on the order of tens to hundreds of millivolts, which is relatively small compared to the potential differences that can be generated in other electrical systems.
However, what is remarkable about the brain is not the amount of energy stored per unit distance, but rather the complexity and efficiency with which this energy is used to support a wide range of neural processes, including information processing, memory formation, and communication between neurons. The sheer number and diversity of neurons in the brain, along with their complex patterns of connectivity and activity, allow for an incredible range of cognitive and behavioral functions that are unparalleled in the natural world.
Types of electrical activity in the brain:

The electrical activity generated by the brain is predominantly AC, meaning that it is characterized by fluctuations in voltage and current that change direction periodically. This electrical activity is generated by the synchronized activity of large groups of neurons, which produce rhythmic patterns of electrical activity that can be measured using electrodes placed on the scalp (a technique known as electroencephalography or EEG).
The rhythmic patterns of electrical activity generated by the brain are typically categorized based on their frequency, with different frequency bands associated with different cognitive and behavioral states. For example, alpha waves (8-12 Hz) are typically associated with relaxed wakefulness, while delta waves (0.5-4 Hz) are associated with deep sleep.
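As a small illustration of these conventional frequency bands, here is a sketch that maps a frequency to its band; the theta, beta and gamma boundaries are standard conventions not spelled out in the post, and exact boundaries vary slightly between sources.

```python
# Conventional EEG frequency bands, in Hz (boundaries vary by source).
EEG_BANDS = {
    "delta": (0.5, 4.0),   # deep sleep
    "theta": (4.0, 8.0),
    "alpha": (8.0, 12.0),  # relaxed wakefulness
    "beta":  (12.0, 30.0),
    "gamma": (30.0, 100.0),
}

def classify_frequency(hz):
    for band, (lo, hi) in EEG_BANDS.items():
        if lo <= hz < hi:
            return band
    return "out of range"

print(classify_frequency(10.0))  # alpha
print(classify_frequency(2.0))   # delta
```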
While the electrical activity of the brain is predominantly AC, it is important to note that individual neurons also generate DC signals as a result of the movement of ions across their cell membranes. These DC signals can be detected using specialized electrodes and are thought to play a role in various physiological processes, including the regulation of blood flow to the brain.

Why it is inevitable that quantum computation, understanding, intelligence and consciousness would mathematically emerge in the brain:
In my previous posts I discussed in detail how, because the total number of neurons and interconnections in the brain is extremely large and uniform, understanding, intelligence and consciousness emerge spontaneously. I described the fine structure of computation of the brain as the inverse of the total neurons, used as a measure: the smaller the figure, the more conscious an entity would be. I also stated that the fine structure of computation, 1/n, where n is the number of interconnected neurons in the processing unit, determines how intelligent an entity is, implying that consciousness is not only in the domain of life forms or humans.

Today I have discussed in detail another, totally different aspect of the brain: its ability to spontaneously acquire quantum computing to a certain degree, which I have called rudimentary quantum computing, and how and why it would arise due to the small value of the fine computational structure constant, 1/(1 × 10^11) = 1 × 10^-11, and smaller still if we include the fine ion structure involved in computation, which is about 1 × 10^18 ion gates. Quantum computation is inevitable but rudimentary, as the brain was not made for quantum computation but achieves it simply due to uniform resonance of ions and electrons. The brain does possess uniformly rhythmic electrical activity, with fields that control these ions and electrons.

The Article was written by Kasule Francis
11/3/23
Image of the human brain.
