
Changing AI Philosophy



What we are really trying to do is understand what it would mean for a human-robot system to act as a rational agent. Cooperative IRL is, at bottom, a definition of how a human and a robot together can be rational in the context of a fully observable world state. Re-examining the purpose of aligning AI pulls in moral philosophy, ethics, and related fields, which can be uncomfortable for empirically minded people, because you cannot simply point a telescope at the universe and read off a list of what you ought to do. People will say that the humanities and philosophy, unlike the sciences, do not have well-defined problems and solutions, either because they do not deal with real things in the world or because their concepts are so vague that the problems seem invented and illusory. So we started by talking about how to bring about this kind of conceptual change and reframing, and then moved on to the philosophy of science behind how different models and theories of alignment are developed. Eric Drexler has also proposed a reframing of AI in which we drop the habit of understanding powerful AI systems as unitary agents, a framing that, he argues, distracts us from many of the real issues around risk, global catastrophe, and value alignment.
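For readers who want the formal object behind that claim about cooperative IRL, the standard CIRL formulation of Hadfield-Menell et al. (2016) can be sketched as a two-player game with a shared reward; the notation below is a paraphrase for illustration, not a quotation from the paper.

```latex
% Sketch of the cooperative IRL (CIRL) game, paraphrasing Hadfield-Menell et al. (2016).
% The world state is fully observable to both players; only the human knows \theta.
M = \big\langle S,\ \{A^{H}, A^{R}\},\ T,\ \Theta,\ R,\ P_{0},\ \gamma \big\rangle
```

Here S is the set of world states, A^H and A^R are the human's and robot's action sets, T(s' | s, a^H, a^R) is the transition function, θ ∈ Θ parameterises a reward R(s, a^H, a^R; θ) that both players share but only the human observes, P_0 is a prior over the initial state and θ, and γ is a discount factor. "Jointly rational" then simply means that the human-robot pair maximises the same expected discounted reward.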





The largest project at FAIR right now, says LeCun, is natural-language understanding for dialogue systems, which will be the basis for intelligent Facebook voice assistants. AI is at the heart of such a system, because to be genuinely useful to users it must be able to answer virtually any question, and that requires some common sense, says LeCun. For example, the DeepText project, which the company recently unveiled, was a direct productisation of work with the Applied Machine Learning (AML) group, originally carried out as a research effort to understand text classification and interpretation using convolutional networks and other deep learning techniques.
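As a rough illustration of the kind of deep-learning text classification described above (this is not Facebook's DeepText code; the vocabulary size, class count, and hyperparameters are placeholders), a minimal convolutional text classifier might look like this:

```python
# Minimal sketch of a convolutional text classifier over token ids.
# Illustrative only: vocab_size, embed_dim and num_classes are placeholders.
import torch
import torch.nn as nn

class ConvTextClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Convolutions over token windows of width 3, 4 and 5.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, 100, kernel_size=k) for k in (3, 4, 5)
        )
        self.classifier = nn.Linear(3 * 100, num_classes)

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        # Max-pool each feature map over the sequence dimension.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))  # (batch, num_classes)

# Usage: logits = ConvTextClassifier()(torch.randint(0, 20000, (8, 50)))
```

The design choice here (parallel convolutions of different widths followed by max-pooling) is one common pattern for sentence classification; production systems like the one the article describes would add word- and character-level inputs, larger vocabularies, and much more training infrastructure.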


Arguments against the basic premise must show that building a working AI system is impossible, either because there is some practical limit to what computers can do, or because there is some special quality of the human mind that is necessary for thought and yet cannot be duplicated by a machine (or by current AI methods).



 

Self-awareness, though, usually implies a bit more capacity: a machine that can somehow perceive the meaning of its own actions in relation to its own condition and, more generally, relate them to its values, to the plans governing its current existence, and to the goals it has been given. Today, scientists and engineers meet at conferences to discuss the potential impact of robots and computers, and the implications of the hypothetical possibility that machines could become autonomous and capable of making their own decisions.


So if you think about controlling AI in the same way you would have thought about controlling earlier transformative technologies, such as nuclear technology, it becomes clear that AI has very different dynamics and characteristics, which means the challenge is different. For example, the 2016 US presidential election led to the current administration and generated a great deal of interest among AI professionals in how AI technology gets used, because all of a sudden there was an administration whose political goals often conflicted with the political values of AI researchers.



Attempts to dress up AI's genuinely significant achievements in anthropomorphic trappings do a disservice to the field, inviting inappropriate comparisons and suggesting that there is more going on than meets the eye. Certainly, recent advances in AI are likely to allow many, perhaps most, of today's jobs to be automated, but there is no reason to believe that the historical pattern of new job creation will stop. Of course, most AI scientists focus on solving their immediate problem, but over the next few decades it will matter a great deal whether their systems fit well with our social and cultural conventions. Science fiction is full of stories about robots running amok, but from an engineering point of view these are design problems, not the unpredictable consequences of tinkering with some supposed natural order. Today's computers operate in limited, well-defined domains; if we want to apply AI systems more broadly, we should carefully reassess the purpose, goals, and potential of the field, and how the general public perceives it.
