You don’t regulate AI. You regulate corporations and governments with laws that are enforced regardless of who the offender is. AI is used by businesses, governments and individuals. The code itself has no political agenda but the coders and the data available to the code determine the outcomes.
Larry LaVerdure is right about regulating the use of AI, but the development of it, now some decades in the making, demands that we, as a society, ask out of self-preservation that generative AI not become capable of annihilating humans. We do not know the point of consciousness, or even its meaning or value (if it has any), so we are not likely to recognize it in a machine. Perhaps we are about to become the dolphins and bonobos of our world, with a “superior” intelligence and consciousness that leaves us behind. We may be subjected to the same environmental and existential stresses we have placed on those creatures. We may not be the last stage of intelligent evolution.
But if we can spare both humans and any such conscious machines moral pain, we should. We may be about to be eclipsed by a new consciousness (after all, our own learning is not totally dissimilar to generative AI’s), but we want to introduce it with as much foreknowledge about its capabilities and qualities as possible.
Do we know how to regulate, and truly know, whatever “it” is? Were the earlier primates worried about the reach of Homo sapiens? I don’t think these unknowns mean we let profit drive development before we gain a minimal understanding of what is being wrought. It does look like OpenAI’s mission to develop the technology for the “benefit of humanity” ran up against capitalism and lost. But maybe that is not the right formulation, since it implies humans control it. Maybe it will be bigger than us, and all we can hope to do is implant in its DNA enough junk genes so it doesn’t kill us off, at least before climate change does.
And our modern governmental model is owned by Corporations and special interests. There is clearly one set of laws and rules for us to follow, while there is a completely different set of rules for them.