“Sumawka Caller” Newsletter

By Michael S Faust Sr.
#3 · 10 min read
For the past year I’ve been writing here about artificial intelligence, trust, and the strange place many people find themselves standing with these systems.
On one hand, AI can be remarkably useful. It can answer questions quickly, summarize information, explain complicated topics, and help people explore ideas they might not have considered before.
On the other hand, many people still hesitate when it comes to trusting what they see.
You can feel that hesitation almost immediately when you talk to someone about AI.
They will say things like:
“It sounds good, but how do you know it’s right?”
“Sometimes it just makes things up.”
“It tells you what you want to hear.”
Those reactions are not unreasonable.
They come from real experiences.
The first generation of public AI tools arrived very quickly. Millions of people began asking them questions before anyone had fully established how reliable the answers would be or how the systems should explain their reasoning.
What people encountered was impressive language.
But language alone does not automatically create trust.
A system can sound confident while still being uncertain underneath.
And that gap between appearance and reliability is where most of the public skepticism around artificial intelligence lives today.
People are curious about AI.
But curiosity and trust are not the same thing.
Trust comes from something deeper.
Trust comes from understanding how a conclusion was reached.
In everyday life we see this all the time.
If a mechanic tells you your car is fixed but cannot explain what was wrong, you might feel uneasy driving away. But if that same mechanic shows you the damaged part, explains what failed, and walks you through the repair, the situation feels very different.
You are no longer being asked to simply accept the outcome.
You can see the reasoning behind it.
The same principle applies to artificial intelligence.
If AI is going to become part of everyday decision making—in medicine, law, engineering, education, or even daily problem solving—people will eventually demand something more than polished answers.
They will want to see the thinking.
That observation is one of the reasons I began building what I call the Faust Baseline.
The Baseline was never intended to make AI sound more impressive. It was built to encourage a different behavior from the system.
Instead of rushing toward the most fluent answer possible, the Baseline nudges the system toward disciplined reasoning.
The goal is simple.
Make the path to the answer visible.
When assumptions are stated clearly and reasoning steps are laid out in a structured way, the person reading the response can examine it. They can challenge it. They can decide whether the explanation holds up.
That transparency is what begins to build trust.
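The Baseline itself is a full framework, but the general idea of asking a system to separate its assumptions from its reasoning steps can be sketched in a few lines. What follows is a hypothetical illustration of that general idea only; the section names and wording are my own placeholders, not the Baseline's actual structure.

```python
# Hypothetical sketch: wrap a question in instructions that ask the
# system to make its assumptions and reasoning steps visible before
# it states a conclusion. The numbered sections are illustrative
# placeholders, not the Faust Baseline's actual format.

def structured_prompt(question: str) -> str:
    return (
        "Answer the question below. Before giving your conclusion:\n"
        "1. List the assumptions you are making.\n"
        "2. Walk through your reasoning step by step.\n"
        "3. State your conclusion and how confident you are in it.\n\n"
        f"Question: {question}\n"
    )

print(structured_prompt("Why does my car pull to the left when braking?"))
```

The point of a wrapper like this is not the exact wording. It is that the reader of the response can now see where an answer came from and challenge any step of it.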
Recently I decided to try a small experiment.
Rather than describing the Baseline myself, I wanted to see how another AI system would respond to it.
So I uploaded the latest build to Microsoft’s Copilot and asked a direct question.
The question was straightforward.
“Based on your experience so far with the Baseline, in your opinion is it a more reliable way to get AI to respond in the best and most accurate way for reliable information and trust?”
In other words, I asked one artificial intelligence system to examine the structure and tell me what it thought.
The response was thoughtful.
Copilot did not treat the Baseline as a novelty. Instead, it acknowledged that frameworks encouraging step-by-step reasoning can help AI systems produce clearer and more reliable answers.
It pointed out something that researchers in artificial intelligence have been discussing for some time: when a system is guided to explain its reasoning process rather than simply deliver conclusions, the quality of the response often improves.
That observation matters.
It suggests that structured reasoning is not just a philosophical idea. It aligns with the direction many AI researchers believe the field must move if these systems are going to be trusted widely.
The challenge is not simply making AI more powerful.
The challenge is making AI more understandable.
People need to be able to see how a conclusion was formed.
Without that visibility, even correct answers can feel uncertain.
Think about the difference between two kinds of teachers.
One teacher gives the correct answer but never explains how it was derived.
Another teacher walks through the reasoning step by step, showing the path that leads to the solution.
The second teacher builds confidence not because the answer is different, but because the thinking is visible.
Artificial intelligence is moving toward the same requirement.
As the technology spreads into more parts of life, people will increasingly expect systems that show their work.
And that expectation may become one of the defining characteristics of trustworthy AI.
The Baseline is an attempt to explore that direction.
It does not claim to solve every problem in artificial intelligence. But it introduces a discipline that encourages systems to move beyond surface fluency and toward clearer reasoning.
The interesting thing about experiments like this is that they reveal something about both the technology and the people interacting with it.
When a system explains itself more clearly, readers begin to engage differently.
Instead of passively accepting the answer, they start examining the reasoning. They ask better questions. They explore the logic behind the conclusion.
In other words, the conversation becomes more thoughtful.
And that may be the real value of structured reasoning frameworks.
They do not just improve how AI responds.
They encourage people to think more carefully about the questions they ask.
Later today I am going to open a short window here for readers who are curious.
During that time I will make the latest Baseline build available so anyone interested can examine it directly and run their own tests.
The goal is not to convince anyone through argument.
The goal is to let people see the structure and decide for themselves whether it changes the way AI responds.
Because in the end, the future of artificial intelligence will not be decided by marketing claims or headlines.
It will be decided by something much simpler.
Whether the systems people interact with every day can earn their trust.
And trust, as it always has, will come from clarity.
Not just in the answers we receive, but in the reasoning that leads us there.
Click this link to experience more.
“Intelligent People Assume Nothing” | Michael S Faust Sr. | Substack
Unauthorized commercial use prohibited.
© 2026 The Faust Baseline LLC
