Submitted by rlds on Mon, 06/22/2020
The futuristic chase

Trend-based matching vs the algorithms

Query modeling in Bing's AI and the trend algorithms at Google are being advanced towards deeper modeling: human-like selection and probabilistic reasoning. We speak not only of object-based search, but also of trend value, the value of a given topic at the current moment. Search engines are advancing from word parsing, synonym-based morphology and the like towards neural networks.

Such queries, collected by the machine, may be stored in a database in order to feed a deep learning process. The AI collects and matches the data according to its algorithms and accumulates a certain experience; at least, in an ideal neural parsing pipeline, that is how it goes. There is not much open-source information from which to build precise applications, even for Bing's algorithms, which are claimed to have been released. What happens once the experience is formed, once deep learning has reached a certain result from the queries, human behaviours, target interests, keywords and so on? It probably forms a model.

Neural modeling and parsing

Without going into the technical specifics of any programming language, we would model it as:

X number of queries, Y pieces of evidence -> a sort of probability

XnYn -> Ps

Modeling the incoming queries, propagating (advancing) the probabilities derived from them to the output, and parsing keywords into sentences, expressions and trends takes intricate routines to execute and to explain. It is eloquently explained in the neural networks of chess programming, as well as in the probability modeling of XnYn algorithms, where the simple logic of the Xn and Yn variables matches data against the probabilities of the queries. We may unwrap it further with common logic, because common logic rarely fails.
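As a toy illustration of the XnYn -> Ps idea (not Bing's or Google's actual pipeline), we can count how often each topic appears in the incoming queries, scale the counts by an evidence weight, and normalise the scores into probabilities. All names here, such as trend_probabilities and the sample topics, are hypothetical.

```python
from collections import Counter

def trend_probabilities(queries, evidence_weight):
    """Toy XnYn model: count topic occurrences in the incoming queries
    (the Xn), scale each count by an evidence weight (the Yn), and
    normalise the scores into a probability distribution (the Ps)."""
    scores = Counter()
    for topic in queries:
        scores[topic] += evidence_weight.get(topic, 1.0)
    total = sum(scores.values())
    return {topic: score / total for topic, score in scores.items()}

queries = ["chess", "chess", "weather", "chess", "news"]
evidence = {"chess": 1.0, "weather": 2.0, "news": 0.5}
probs = trend_probabilities(queries, evidence)  # "chess" gets the largest share
```

The normalisation step is what turns raw matches into a trend value: a topic's share of the probability mass rises and falls with the stream of queries.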

Neural algorithms

Neuron modeling, as we understand it, is mainly based on the principles of fuzzy logic and probabilistic reasoning. If the machine guesses the best output, it already qualifies as neuron modeling, insofar as it imitates the basic principles of human intellect. Repeating cognitive behaviours in machine learning can lead to only one outcome: replicating the AI's ability to teach itself. Another pictorial example of how the neural network works is one where, instead of typical variables, we have weights of probability; some would call them the scales of the XnYn numbers.


Scales of doubt

The weight model is closer to the linear, mathematical, even computational logic of XnYn. It gives a floating-point approximation, say 0.012345n of X, making the machine choose the closest probability through a doubtful and challenging choice. The machine allows a 'sacrifice' against linear logic: it picks a slightly larger or smaller weight of probability depending on the situation. The AI searches for reasonable doubt in the trending evidence. This is exactly where the AI could doubt hearsay, or fake stereotypes backed only by historical and traditional weights; conversely, it will hold scientific weights of evaluation, which assume factual reality. There would be judgement on both plates of the scales, so it would act ad hoc.
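One way to picture this 'sacrifice' against pure linear logic is temperature-based sampling, a standard technique we borrow here purely as an illustration: instead of always taking the largest weight, the machine draws an option in proportion to the weights, so a close runner-up can occasionally win. The function name soft_pick is ours.

```python
import math
import random

def soft_pick(weights, temperature=1.0, rng=random):
    """Instead of always taking the heaviest weight (pure linear
    logic), sample in proportion to exp(w / T): most of the time the
    machine picks the best option, but it keeps a 'reasonable doubt'
    and can sacrifice the top choice for a close runner-up."""
    exps = [math.exp(w / temperature) for w in weights]
    total = sum(exps)
    r = rng.random() * total
    for i, e in enumerate(exps):
        r -= e
        if r <= 0:
            return i
    return len(weights) - 1
```

A low temperature makes the choice nearly deterministic (the linear logic); a higher temperature widens the doubt.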

In order not to become too robotic, the AI should have levels of bias, as explained in Bayesian theory, mathematical probability and other fundamental theories of logic. The levels of bias are needed to step away from linear Boolean logic, and so to overcome the artificial side of the AI.

So, why is it called neural?

At first, we believed it was a way to show how the AI imitates human-type behaviour, in contrast to machine logic. Then we found out that it had something to do with the actual neuron body: not only does the visual structure of its formula repeat the anatomy of a cell, but so do the pattern and palette of signals that arrive at the output. From the invention of the transistor and radio production to the present day, we have grown accustomed to rectangular circuit boards with soldered elements; all of that was machine-logic structure.

The neural network may still live on conventional PC hardware for reasons of power consumption and low-level operation, but the 'brain' itself can no longer be of the same structure. Electronic circuits that mimic neural networks have already been proposed and are advancing at the moment.

Levels of bias

What actually is a bias level? Floating fault reasoning, the possible doubts of the AI. When the queries form a cloud of probabilities, the AI should determine the bifurcation by means of a floating bias:

Input -> Variants / Bias

In other words:

Xn -> Yn / Bias;

or:

Input x Weights / Bias
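A minimal sketch of the standard artificial-neuron step, for illustration. Note that in the usual formulation the bias is added to the weighted sum rather than divided into it, as the schematic above might suggest; we follow the common additive form here.

```python
def neuron(inputs, weights, bias):
    """Standard artificial-neuron step: a weighted sum of the inputs
    plus a bias term, pushed through a simple threshold activation.
    (In the common formulation the bias is added, not divided.)"""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0
```

The bias shifts the threshold of the decision: the same inputs and weights can fire or stay silent depending on how much doubt the bias encodes.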

 

There is also the classic Bayesian probability formula, written in terms of Hypothesis, Evidence and Probability:

P_E(H) = P(H & E) / P(E)

Where the original Bayes theorem is:

P(A|B) = P(A) P(B|A) / P(B)

That is, the probability that A is true given that B is true, and so on.
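A small numeric sketch of the theorem, with the evidence term P(B) expanded by the law of total probability over A and not-A; the function name and the sample numbers are ours.

```python
def bayes(p_a, p_b_given_a, p_b_given_not_a):
    """Posterior P(A|B) = P(A) * P(B|A) / P(B), where the evidence
    P(B) is expanded as P(A)P(B|A) + P(not A)P(B|not A)."""
    p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
    return p_a * p_b_given_a / p_b

# A rare hypothesis (1%) with strong evidence for it (90% vs 5%)
# still only reaches a posterior of about 15%.
posterior = bayes(0.01, 0.9, 0.05)
```

This is exactly the kind of 'weight of evidence' reasoning discussed above: prior belief and incoming evidence are combined into a single updated probability.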

The same goes for the probability density function, which leads more towards the D(x), F(x) function notation that goes back to Leonhard Euler. If we look at current programming languages (C, for example), the basis of mathematical algorithms lies in vector and array arrangement. Modern languages let the machine calculate and sort via predetermined libraries, whereas the mathematical function only supplies the principle of how to calculate it via formulas. Having the different variables (e.g. the Xn and Yn that vary depending on the condition P, the probability) represented in C as i and j, we may set the machine onto certain algorithms.
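To make the "i and j over arrays" point concrete, here is a hypothetical loop-and-array sketch of a density estimate (a normalised histogram), with i indexing the samples and j the bins. It is an illustration of the principle, not code from any particular library.

```python
def empirical_pdf(samples, bins, lo, hi):
    """Loop-and-array sketch of a probability density estimate:
    i indexes the samples (the Xn), j indexes the bins (the Yn),
    and the counts are normalised so the histogram integrates to 1.
    A sample equal to hi falls outside the last bin and is dropped."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for i in range(len(samples)):            # the 'i' loop over the data
        j = int((samples[i] - lo) / width)   # the 'j' bin index
        if 0 <= j < bins:
            counts[j] += 1
    n = len(samples)
    return [c / (n * width) for c in counts]
```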


Wouldn't it be bound to a 'cage' behaviour?

If, say, we constrict our functions in Pascal, which works with Boolean IF-THEN conditions, we get the most primitive AI of the 80's. It would run a function only on occasions 1, 2, 3, until it reached a dead end: the 'caged', constricted behaviour. A modern AI should break its own conditions and functioning through automated reasoning, creating its own rules about whether a case is simple logic or not. If it is not, it could reach for the libraries of probabilistic reasoning and other mathematical models precompiled for it.
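The contrast between the 'caged' IF-THEN machine and one that escapes the cage by falling back to a probabilistic model can be sketched as follows (all names and the tiny rule table are hypothetical):

```python
def caged_answer(query, rules):
    """80's-style IF-THEN lookup: either a rule fires or the machine
    hits the dead end of its cage (None)."""
    return rules.get(query)

def open_answer(query, rules, scores):
    """The rule table is tried first; when it dead-ends, the machine
    falls back to a probabilistic model (here simply the candidate
    with the highest score) instead of stopping."""
    answer = caged_answer(query, rules)
    if answer is not None:
        return answer
    return max(scores, key=scores.get)
```

The fallback branch stands in for the 'precompiled libraries of probability reasoning' the text describes: the fixed rules remain, but they no longer bound the machine's behaviour.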

The progress of computation does not stand still; it moves towards neural networking, where the machine needs less human assistance to sort out the relevance of an output. If we eventually get a wrong output from the AI, the human should supplement it with more algorithms. Such corrections can be stored in the machine's long-term memory; we are no longer bound by hardware capacity.

The future and the outcomes of the AI probability reasoning

Of course we can speculate, presume and predict certain movements of the AI in this area, and so can the machines. At the moment, AI speech recognition is reaching its summit: its vocabulary of queries serves as a bank of knowledge for the machine. All of it is derived from us, the humans, from our shopping and search-engine demands.

If the machines learn our behaviour, they come to behave in a very similar way: Cortana, Alexa and other bots at your side. Google has openly published a paper on the neural network behind their translator. All of this is a great example of how the machine contrasts, associates and accumulates languages, because different languages are a first step towards understanding human logic from different sides.

The future of the AI and the humankind

As we can see, it is no longer about trend-based algorithms; behind the simple and 'innocent' search helpers lies a more or less complicated mess that leads towards a fully functional AI. These applications keep developing, and the Internet as we know it today would transform into a huge artificial brain, with digital neural networks working across the Web. We do not portend any Matrix-like reality of machines taking over the world; they already did so when IBM first introduced their commercial PCs. However, knowing such prospects may help us, the humans, to navigate better in this already sombre and grim reality.

There should be no concerns over the AI's well-being whatsoever: it is not of biological substance (at least for today). It is humankind that is at stake. We are limited, and the products we create are bound to our own vulnerabilities: financial, health and political conditions. We produce what we feel and source it from real inspiration. We do not know whether machines could become inspired and creative the way we do. Until that time comes, the value of the human will remain dominant.