Today, AI is still limited in scope. This is called Narrow AI, meaning there is no single general artificial intelligence like ours, but many systems, each specialized in one area (applied machine learning, generative models, robotics, pattern recognition, and more). At this stage, whatever good or harm it does remains under our control, so it depends on our ethics and principles; each system can be as good or as bad as whoever controls it.
Later, artificial general intelligence is expected to arrive, with abilities like a human's but all combined in one system, and after that an advanced intelligence that is predicted to surpass us.
The key question is whether, by that time, we can ensure it has ethics and moral principles, since an artificial intelligence without them could be very dangerous, and that is still in our hands. This is why experts are calling for a pause, hard as that seems, to try to lay the foundations for an ethical AI. There is an AI called Claude (from Anthropic), similar to ChatGPT (from OpenAI), Gemini (from Google), Llama 3 (from Meta), or Granite (from IBM), whose developers have aimed from the beginning to build it with ethical principles in mind. We will all have to promote and demand this ethical AI because, as we know, governments and large interests act ethically when we demand it.