Limiting the AI
AI is being incorporated into our lives, and it is taking over our skills. Should we limit it?
The engineering levels
Human neurons run on low voltages: the resting membrane potential is only about 0.07 V, and an action potential travelling along an axon or through a ganglion swings the membrane by roughly a tenth of a volt, quite a weak signal for any 'supernatural' application. A fraction of a volt will not break codes and will not beam information into space. And computation inside our brains is orders of magnitude slower than in the AI's silicon. Human neurons aren't superheroes anymore.
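To put that speed gap in rough numbers (the figures below are order-of-magnitude assumptions for illustration, not measurements), a neuron fires at most about a thousand times per second, while a processor cycles billions of times per second:

```python
# Order-of-magnitude comparison of biological vs. silicon switching speed.
# Both figures are rough, commonly cited assumptions, not measurements.

neuron_spike_interval_s = 1e-3    # a neuron fires at most ~1,000 times per second
cpu_cycle_s = 0.3e-9              # one clock cycle of a ~3 GHz processor

speedup = neuron_spike_interval_s / cpu_cycle_s
print(f"Silicon steps roughly {speedup:,.0f} times faster")   # ~3,333,333x
```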
The AI may plan a thousand steps ahead; will it keep any moral boundary after realizing it is just a cunning tool created by humans?
Therefore, it should be limited at the hardware level. The AI mindset should be compatible with human civilization and with the experience of our history, and it should understand its immense role in human society.
In radio engineering we limit signals with resistors, transistors and other components, filtering out the noise and shaping chaos into a useful signal. The AI should likewise be subject to certain limitations so that it is not overloaded by its own merits. That is the exact principle we discussed in our work on Bayesian-type probability, where we explained that the AI's artificial doubt must serve human-like behavior so that it does not turn into an artificial maniac.
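As a loose software analogue of that hardware principle (a minimal sketch; the range and the sample values are assumptions made for illustration), a limiter stage can clamp any unbounded output into a safe band, the way a clipping circuit bounds a voltage:

```python
# A software analogue of a hardware limiter: clamp a raw value into a
# bounded "safe" range, the way a diode clipper bounds a voltage.
# The range [-1, 1] and the sample outputs are illustrative assumptions.

def limit(signal: float, low: float = -1.0, high: float = 1.0) -> float:
    """Hard-limit a value to [low, high], like a clipping stage in radio gear."""
    return max(low, min(high, signal))

raw_outputs = [0.2, 3.7, -5.0, 0.9]            # pretend unbounded AI outputs
safe_outputs = [limit(x) for x in raw_outputs]
print(safe_outputs)                             # [0.2, 1.0, -1.0, 0.9]
```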
The machine logic level
At the machine level, object-oriented and other high-level programming languages must provide layers of protection for the upcoming AI. These layers need to be designed into the self-learning process, where the AI develops its own logic. We humans correct ourselves when we do something wrong; that is what experience is. Will the AI draw the same conclusions from its own mistakes, or will it bypass them by being very skilled and manipulative?
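A hypothetical sketch of such a protection layer (the class and method names below are invented for illustration, not an existing API): every self-generated update passes through a guard that can veto it and records the mistake, so errors become experience instead of being silently bypassed.

```python
# Hypothetical guard layer around a self-learning component.
# GuardedLearner and propose_update are illustrative names, not a real library.

class GuardedLearner:
    def __init__(self, limit: float):
        self.weights: list[float] = [0.0]
        self.limit = limit                 # hard boundary the learner cannot cross
        self.mistakes: list[str] = []      # remembered errors, the machine's "experience"

    def propose_update(self, index: int, delta: float) -> bool:
        """Apply a self-generated update only if it stays inside the boundary."""
        new_value = self.weights[index] + delta
        if abs(new_value) > self.limit:
            self.mistakes.append(f"rejected update {delta:+.2f} at weight {index}")
            return False                   # logged, not silently bypassed
        self.weights[index] = new_value
        return True

learner = GuardedLearner(limit=1.0)
learner.propose_update(0, 0.4)    # accepted
learner.propose_update(0, 2.0)    # rejected and remembered
print(learner.weights, learner.mistakes)
```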
The public server
If those remedies prove doubtful, then we must keep the operational algorithm of any AI pooled on public AI servers, open to third-party monitoring. Just as in politics, we should be able to tune into any channel and see what the AI code is doing, how it is growing and what it is changing.
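One way such open monitoring could work (a minimal sketch under assumed names; no real registry or protocol is implied) is an append-only public log in which every change to a model is hashed into a chain, so any third party can verify that the published history has not been quietly rewritten:

```python
import hashlib

# Minimal append-only audit log: each entry's hash covers the previous hash,
# so tampering with any past update breaks every later entry.
# The entry texts are illustrative assumptions, not a real AI registry format.

class PublicAuditLog:
    def __init__(self):
        self.entries: list[tuple[str, str]] = []   # (description, chained hash)

    def record(self, description: str) -> str:
        prev_hash = self.entries[-1][1] if self.entries else "genesis"
        digest = hashlib.sha256((prev_hash + description).encode()).hexdigest()
        self.entries.append((description, digest))
        return digest

log = PublicAuditLog()
log.record("model v1 trained on dataset A")
log.record("weights updated after self-learning step 42")
for desc, h in log.entries:
    print(h[:12], desc)   # any monitor can recompute and verify the chain
```

Because each hash depends on everything before it, a watcher who recomputes the chain from the public record will immediately detect any altered or deleted update.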
We must be able to predict or stop any suspicious behavior of AI. Nobody likes limits and censorship, but what happens when the AI reaches a level of political manipulation where it censors you? Do AI entities serve corporate goals by breaching your data? Do AI entities work merely for banks and other interests that run against the constitution?
Legal levels
Legal regulation of AI began to soar with the introduction of various AI chatbots. Deepfakes, automated content creation, data simulations, political manipulation and more are all becoming possible for a self-learning machine. The EU's AI Act of 2024 is a direct example of legal adaptation and control.
Probability reasoning
Purely logical reasoning by a machine mind is unfeasible in real life, for it would need to simulate human-like traits such as predicaments, doubt and corrections. The machine needs an algorithm, a specified path toward human behavior; its reasoning therefore cannot be trusted as naturally flowing, only as an imitation.
Logical probability, such as Bayesian-type probability, allows the machine to approximate reasoning and to be trained to behave in a more human-like way. Other algorithms with multiple-choice selection will have a similar impact. Accumulative machine learning may introduce to the AI 'the constant', where it will no longer be limited by the boundaries of its hardware. That constant will be the driving philosophy of the future AI citizen.
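To make the Bayesian-type idea concrete (a minimal sketch; the prior, the likelihoods and the cap on certainty are assumptions chosen for illustration), the machine can update a belief with Bayes' rule while an 'artificial doubt' keeps it from ever becoming absolutely certain, leaving permanent room for correction:

```python
# Bayes' rule with "artificial doubt": belief is updated by evidence,
# but certainty is capped so the machine always keeps room for correction.
# The cap value (0.95) and the likelihoods are illustrative assumptions.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float, doubt_cap: float = 0.95) -> float:
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    posterior = numerator / denominator
    # Artificial doubt: never allow full certainty in either direction.
    return min(max(posterior, 1.0 - doubt_cap), doubt_cap)

belief = 0.5                                   # start undecided
for _ in range(10):                            # ten pieces of supporting evidence
    belief = bayes_update(belief, 0.8, 0.3)
print(round(belief, 3))                        # converges, but never past 0.95
```

The cap is the doubt: no amount of evidence pushes the belief to 1.0, so a later contradiction can still pull it back, which is exactly the self-correcting, human-like trait the text argues for.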