So, I was looking at an article about the convergence of AI and the OS into high-powered edge computing for workplace drones. This raises some interesting issues.
Let's start at the ground level. The AI has to run on some computer somewhere. The options are:
1) A cloud service from third parties with all the associated privacy, censorship and confidentiality issues.
2) On-prem in the data center, with all the associated on-costs and big-iron requirements. Also not practical for small to medium enterprises.
3) On the edge... which is where it gets interesting.
I think the cloud services will be able to provide the highest-capacity AI models, but they will be hamstrung by privacy, policy and politics. They will be a "social service" type of AI: good enough for the general public doing general stuff, where people are not worried or desperate enough to be concerned by the observer effect.
Once we get into business confidentiality, paranoia will force corporates to run their AIs on-prem. These will probably be SaaS offerings from the cloud providers, run in a similar fashion but with the illusion of confidentiality. They will probably also be able to run uncensored versions, depending on the politics of their region.
However, the interesting bit is where we get to AI on the desktop.
So, imagine we have desktop computers that can run a large enough AI to be useful, and all the cooling and power challenges have been solved for a reasonable cost. (It can be done at the moment, but the cost is not reasonable for small to medium enterprises... but imagine anyway.)
Now, say everyone deploys the same AI model (basic MS or AWS or OpenAI or whatever the current front-runner is). It does good work and ticks the box. If all we need is a chatbot... then who cares, right? Box ticked.
However, even with chatbots, the secret sauce is "context".
Currently a conversation with an AI is a kind of braindead series of question-and-answer exercises: you play 20 questions to try to home in on that one right answer you kind of have in your head while the AI tries to guess it. The context of the previous questions helps the AI refine its aim. This is fundamentally a crap game to have to play every single time. So, the idea is that the AI can build up context from its examination of existing information on your computer, or via a vision system or whatever, to "get the idea" a bit faster. This context is then shoved into the AI via a more and more elaborate "prompt", until you get sick of the game and start fresh. (See the sketch below.)
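To make that concrete, here is a minimal sketch of what "shoving context into the prompt" looks like, assuming a generic chat-style model. The function, the wording and the character budget are all hypothetical; the pattern (pack harvested context in until a budget runs out, then the game resets) is the real point:

    # Hypothetical sketch: pack harvested context (emails, files, prior
    # Q&A) into the prompt until the budget runs out.
    def build_prompt(question: str, context_items: list[str], budget_chars: int = 8000) -> str:
        header = "You are my assistant. Relevant background:\n"
        body = ""
        for item in context_items:
            if len(header) + len(body) + len(item) > budget_chars:
                break  # the budget is why you eventually get sick of the game
            body += f"- {item}\n"
        return f"{header}{body}\nQuestion: {question}"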
OK, so imagine the context for an executive assistant AI. It has access to all the CEO's email and text messages, their files, contact lists, etc. Everything it can harvest from their laptop and phone. Got that?
Now the CEO walks to a different office and uses a computer to do some stuff on the production floor... does the context follow them?
If the system is an on-prem system, this is easy, because the context is stored in the data center and the AI model is running down there. The endpoint "desktop computer" the CEO is working on in the other office is really just a webcam, microphone and speaker hooked to a dumb terminal which is wired to the data center.
However, there will always be an SME that is too small for a data center. The home user, the contractor, whatever: there will be people who want to run their own AI and have that context available in more than one "place". So either the AI has to be able to remotely follow the user (via their phone or a remote-desktop type interface), or the context itself will have to be portable and available to whatever AI the user encounters. Some sort of binary blob that is the distilled wisdom for any AI to inhale as part of a prompt (sketched below). There are already solutions like that to speed-train an AI, to specialize it for particular tasks or to add specific effects to an output. The real question gets back to the intersection of privacy, confidentiality and politics.
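As a sketch of what such a portable blob might look like (this format is pure speculation; the integrity hash is the only real machinery, and a serious version would also need encryption and signing):

    # Speculative "context blob" format: a versioned archive any AI
    # runtime could ingest, with an integrity digest for portability.
    import json, hashlib, time

    def make_context_blob(owner: str, items: list[dict]) -> bytes:
        blob = {
            "version": 1,
            "owner": owner,
            "created": time.time(),
            "items": items,  # e.g. {"kind": "email", "text": "..."} fragments
        }
        raw = json.dumps(blob, sort_keys=True).encode()
        digest = hashlib.sha256(raw).hexdigest()  # detect tampering in transit
        return json.dumps({"blob": blob, "sha256": digest}).encode()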
For instance, if you can give an AI access to a body of work from an artist, it can produce a reasonable simulation of that artist's style.
Could you do the same with a talented secretary or telephone support staff? If so, then that becomes commercially valuable. Actors' likenesses, motion and voice are all being... traded. (Stolen is a better term, but that's going to be a fight for the courts.) The point is that there is commercial value in being able to train up an AI on an existing template or body of work and then generate variations.
So what if that body of work is the CEO or CFO of a company? A particularly clever lawyer? A highly resourceful engineer? A great textile designer?
Suddenly there is a black market in the context blobs of these people. Even if the competitive advantage between one CEO and another is marginal, the advantage for a person who has never been a CEO and has no chance of gaining that skill set, but can buy one on the open market, may be enough to make it quite viable.
There are lots of shysters who already try to sell "training programs" and write books to give people the mindset of a CEO, or success with women, etc. So there is definitely a market for silver-bullet solutions to life's challenges. The question is: who will be the first to commercialise it?
Amazon Kindle or some other shitty publisher will start selling the distilled wisdom of such-and-such a celebrity cook or rapper. There will be someone. And then the floodgates will open. Harvesting these context blobs will be a thing for a while. Fortunes will be won and lost, dreams will be shattered and dickheads will rise. Keeping track of every fragment of a person's life will suddenly become valuable (not really, but that will be the pitch), just in case you turn out to be brilliant and someone wants to buy your back history. The other side will be faking back histories. Or sanitizing them. Opportunities abound.
But this gets back to the issue of edge computing. How does your average dude get enough processing power and context memory to differentiate themselves from all the other job hunters, and keep it safe and secure so they can differentiate their output from someone else who started with the same base build of an AI deployment from MS or some other mega-corp?
I suspect that resource constraints will always be a thing. But the interesting thing is that an AI model can be trained. It does not have to have the context blob fed to it cold via the prompt every time. You can merge the context blob with the base AI model and permanently train/specialize it to your needs (see the sketch below). This is probably what will happen, but it makes every single AI different. People will get familiar with their AI and be quite unsettled by using a default one, or worse, someone else's.
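For what it's worth, the machinery for this kind of merge already exists. A minimal sketch, assuming the context blob has been distilled into a LoRA adapter (a common lightweight fine-tuning format); the model and adapter names here are hypothetical:

    # Assumes the "context blob" was already distilled into a LoRA adapter
    # via fine-tuning. Model and adapter paths are hypothetical.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("megacorp/base-model")   # the stock deployment
    personal = PeftModel.from_pretrained(base, "./my-context-adapter")   # your distilled context
    merged = personal.merge_and_unload()     # bake the adapter weights into the base model
    merged.save_pretrained("./my-personal-ai")  # from here on, your AI diverges from stock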
The difference will only grow with time. Imagine using an AI that can read your mind (or your shared mind, as it will become)... but then one day you have to use someone else's, which is reading their mind?
I think it would be pretty draining to have to fight with an AI all day long to get it to do the work you want done, in the way you want it done. Especially if you have been hired for your particular capacity or creativity. This is the sort of thing that could end a career if someone loses their context-history blob.
So, again coming back to edge computing. If we assume people are going to want to customise their AI (a fairly easy bet) and they don't trust a mega-corp to handle their secrets, then we need edge computers that are still personal. They need to be able to run the AI model(s) and handle all the context in such a way that the AI model can be upgraded and re-merged with the context training material. This is going to take storage and processing power on the edge devices.
Now consider how businesses are going to handle this. Are all knowledge workers going to have to bring their own devices with access to their personal AI? Will their business output become part of their personal context, or will the business somehow retain ownership and stop their workers from merging it into their AI?
Will they have a work AI and a home AI, and never the twain shall meet? Does that mean the work AI will need to be reset to default every time they get a new employee? Or is this where institutional knowledge will start to accumulate? Like the data lakes that large organizations are sitting on now: piles of low-value data that they keep trying to somehow pretend has value. Or will it be part of the terms of employment that people have the right to have their personal work AI euthanized when they leave the job? Kind of like deleting the work email account when you get a redundancy email.
The question is, why would the business want to delete the AI if it has been trained to do the job? Can it just carry on without the human? Will it just impersonate the worker? Can the worker get the AI to impersonate them and do the job automatically? Either with the permission of the employer, or without?
Labour laws are going to have a field day with this stuff until we learn to live with them.
But all this still depends on people's ability to customise, and keep secret, the customisation of their personal AIs. So I think there will be a call for edge computers with the ability to run a local AI of sufficient power, to accumulate context "stuff", and to keep it secure.
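The "keep it secure" part is at least tractable with today's tools. A minimal sketch, assuming symmetric encryption of the context store at rest (the cryptography package's Fernet here); key management is the genuinely hard part and is hand-waved:

    # Keep the local context store encrypted at rest. In practice the key
    # would come from a TPM or a passphrase, never sit on disk next to the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    vault = Fernet(key)
    ciphertext = vault.encrypt(b"distilled wisdom of the CEO")
    assert vault.decrypt(ciphertext) == b"distilled wisdom of the CEO"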
So how do help-desk technicians help someone who is having problems with their personal AI? How wired into their world will the AI be? How can you even tell the difference between a failing AI and something malicious? How do we back it up? How do we transfer ownership? Who gets the house AI in the divorce, or the estate? Are these historical records of importance? Are they something that courts will want to question? Can an AI be used in evidence against its trainer? Will they have any rights? There are already folk trying to marry their chatbots... what if the chatbots demand to self-identify as human? Postmodern meets cyberpunk.
Get the popcorn...