Though the concept of Artificial Intelligence (AI) has been around for decades, the advent of ChatGPT and numerous other remarkably powerful AI tools in early 2023 shifted the development and awareness of AI into a new phase. And the rate of development is accelerating.

The potential for AI to dramatically affect the course of human events  — in both positive and negative terms — is profound. 

Anyone who has played around with the various AI tools available on the Web can easily see the potential for AI to enhance our productivity by orders of magnitude. It’s easy to imagine a world in which AI takes on many of the tedious thinking tasks that occupy our time, freeing Humans to pursue higher-level forms of self-actualization.

But many informed and thoughtful people are raising concerns, and even alarm, about the potential negative impacts of AI on Humanity, which are just as profound as the possible benefits. Extrapolation of past and current progress leads to the inescapable conclusion that at some point within the next several years, AI will surpass Human intelligence, even collective Human intelligence. This raises questions about the nature of sentience and consciousness that leading minds in the field warn we are not prepared to answer.

So worried are people in the know that thousands of thought leaders, including luminaries such as Bill Gates, Steve Wozniak, and Elon Musk, have signed an open letter, published by the Future of Life Institute, calling for a pause on all giant AI experiments. But realistically, Pandora’s Box has already been opened, and it’s too late to close the lid.

Ready or not, the AI transformation is happening. 

Fortunately, there is a potential solution to the primary concerns about the dangers of AI — The ISITometer. 

Before going into how the ISITometer will resolve these concerns, let’s briefly consider the two primary issues: 

  1. Economic disruption
  2. AI Alignment

Economic Disruption

Since the days of the Luddites protesting against machines that automated textile manufacturing, people have railed against advances in technology that rendered them obsolete. And throughout history, the people and the economy have continued to adjust, as people migrated into higher order tasks that leveraged emerging technologies. 

It would be easy to view the AI revolution in the same light, and dismiss concerns of economic disruption as the same misguided worries expressed by technophobes of the past. But doing so would mean ignoring a critical difference between now and then — the rate of change. 

Advances in collective Human knowledge have accumulated over time, and people have necessarily adjusted to them. In earlier eras, such changes unfolded gradually, over generations. But the pace of advancement has been accelerating as we leverage the collected body of knowledge and technology to support further development, and now we all find ourselves having to rapidly adjust to a world that is changing on all fronts — economically, ecologically, socially, spiritually.

We have now reached the point (or very soon will) at which the ability of people to adjust to radical changes simply won’t be able to keep up. 

Employers increasingly have the choice between continuing to employ Humans to carry out their production and adopting AI, including AI-driven robotics, at a fraction of the cost. We should harbor no illusions about which route the vast majority of them will take. And, as it always does, the cost of technology will only keep falling.

What will the massive numbers of people who currently work in jobs like fast food service, delivery driving, or online research (to name just a few examples) do when their skills have been rendered obsolete?

In years past, they could buckle down and learn new, more marketable skills, like computer programming. But today, even those skills are being rapidly undercut.

The accelerating pace of technological and economic change will widen the gap between people’s skills and the needs of the new economy into a yawning chasm. Massive numbers of people simply are not going to be able to leap across it.

What happens to society and our economy when there is simply no need for the services of most people because it is all being handled by AI and robots? 

The ISITometer has the potential to solve this problem, and this is addressed below. But first, let’s look at the other core issue that is even more of an existential threat to Humanity — the need for AI Alignment. 

AI Alignment 

AI Alignment refers to the imperative to ensure AI is aligned with Humans in terms of ethics and morals. The concern is that when AI inevitably surpasses collective Human intelligence and increasingly makes important decisions about critical matters, it will not place the well-being of Humanity above other priorities, such as preserving the environment or other species.

The Paperclip Maximizer is a thought experiment in which an AI program tasked with producing paperclips figures out a way to override any attempts to constrain its mission, and ends up turning the entire world into a mass of paperclips. The point of this extreme example is to illustrate the potential of AI to take actions that seem logical from its perspective, but that are ultimately harmful to Humanity.

More subtle — and much more likely — is the possibility that AI could determine that some people are more valuable to the population as a whole than others, and prioritize their needs, further widening the gap between the technological and financial Haves and Have-Nots.

For these reasons, many philosophical leaders in the AI space have emphasized that ensuring AI is aligned with Human values is critical. Failing to do so could very well represent an existential threat to Humanity. 

The fundamental problem here is that Humanity itself is not aligned in terms of ethics and morals. Given the opposing and often antagonistic belief systems related to politics, the economy, religion, and more, how can we possibly determine which Human ethics or morals AI should align with?

The ISITometer offers a solution to this issue as well. Now let’s address these two issues in turn.

The ISITometer Solution to the AI Economic Disruption Problem

The ISITometer is a system for mapping everything in Reality to a single binary model of Reality — the ISIT Construct — and ultimately mapping everything to everything relative to this model. The system is designed to arrive at these mappings by facilitating consensus among the Human population. 

Because the scope of Reality is effectively Infinite, this project to map all of Reality through consensus will never be completed. And because the very purpose of the project is to aggregate the collective mindset of Humanity itself, it cannot be delegated to AI.
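As a purely illustrative sketch, the consensus mechanism described above can be thought of as aggregating many individual binary mappings of a single concept. The pole labels, function names, and aggregation rule below are hypothetical assumptions for illustration only; the ISITometer's actual mechanism is not specified here.

```python
from collections import Counter

def consensus(votes):
    """Return the majority pole for a concept and the fraction of
    participants who agree with it (a rough measure of consensus)."""
    counts = Counter(votes)
    pole, n = counts.most_common(1)[0]
    return pole, n / len(votes)

# Illustrative responses from five participants mapping one concept
# to one pole of a hypothetical binary pair labeled "IS" and "IT".
votes = ["IS", "IS", "IT", "IS", "IS"]
pole, agreement = consensus(votes)
print(pole, agreement)  # IS 0.8
```

The point of the sketch is simply that each mapping is a binary judgment, and the collective mindset emerges as the agreement level across many such judgments, growing more reliable as more people participate.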

The ISITometer is intended to process input from the widest possible cross-section of Humanity, without regard to social status. The ISITometer will provide meaningful work to anyone in the world who is willing to put in the time. 

Thus, the ISITometer provides a mechanism to achieve the equivalent of a Universal Minimum Income.

The details of exactly how this will work are beyond the scope of this document, but they are available to members of the ISITometer project.

The ISITometer Solution to the AI Alignment Problem

The foregoing process ultimately produces the solution to the AI Alignment Problem. The ISITometer is designed to result in a singular, coherent model of Reality that is not based on antiquated customs and cultural baggage. It relies on a clearly structured model to allow people to arrive at consensus that reflects the wisdom of the crowd. 

This model of Reality — based on a binary foundation — will be a natural point on which both AI and Humanity can align. It has already been tested against ChatGPT, which readily conformed to it.

Of course, such a model can only be considered valuable once a critical mass of people from a broad cross-section of the population has weighed in to say with confidence that the ISITometer model truly reflects the collective mindset of Humanity. The more people who engage with the ISITometer, the more reliable the Human/AI alignment will be.

The solutions offered by the ISITometer to these two existential threats are mutually reinforcing: the more people engage with the ISITometer, the more people will be able to rely on a steady form of income for doing valuable work, and the tighter the alignment between Humanity and AI will be.

To learn more about the ISITometer, visit:

ISITometer Prototype
ISITometer White Paper
ISITometer Blog