http://spectrum.ieee.org/tech-talk/biomedical/diagnostics/researchers-create-a-schizophrenic-computer
This article is interesting for lots of reasons... but the main one is the effect of learning speed ("hyper learning") on the formation of a neural network. Does this suggest that an AI could become prone to mental health issues similar to ours, due to innate structural elements in the NN model? This is especially relevant for emergent AI, which may be quite poorly formed. It's something I have not heard anyone talk about... apart from all the "cold logic meets warm humanity" stuff that stems from HAL in 2001: A Space Odyssey.
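For flavour, here's a toy version of that hyper-learning effect (a minimal sketch in Python, not the DISCERN model from the article; the tiny linear network, the data and the learning rates are all invented for illustration). Crank the learning rate up and the same network that was settling down starts blowing up instead:

import numpy as np

rng = np.random.default_rng(1)

# A tiny linear "network" learning a fixed mapping from 5 inputs to 1 output.
X = rng.normal(size=(100, 5))
true_w = rng.normal(size=5)
y = X @ true_w

def train(lr, epochs=50):
    # Full-batch gradient descent on squared error; returns the final error.
    w = np.zeros(5)
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return np.mean((X @ w - y) ** 2)

print("normal learning rate:", train(lr=0.05))  # error shrinks toward zero
print("hyper learning rate: ", train(lr=2.0))   # error explodes; the net "loses the plot"

Obviously schizophrenia is not gradient blow-up, but it makes the structural point: the same architecture, fed the same experiences, ends up healthy or broken depending on a single learning parameter.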
It's interesting to suggest that AI may be as fragile as humans are in some ways, simply through that shared "DNA" of neural structures, information overload and the effects of time (ageing) on their neural forms.
Along with the issues of hardware decay and software rot... it looks like it may be quite challenging to be in the AI health/tech support business.
What do you do with an AI that has mental health issues? Can you treat it? Do you require ethics clearance? Are you bound by the same "first do no harm" concepts that are applied inequitably to humans? If you can back up the AI and restore it from backup, are you then bound to treat each of the instances with the same respect? Can you destructively "take one apart" to see what's wrong with it, before applying that knowledge to another instance?
This also raises the issue of what happens if you clone an AI and the clones evolve independently of the original: should they then be treated as individuals (as they technically are different)? This would be reproduction for AI. The problem is that it's then impossible to use the "clone and take apart" strategy, as every time you clone the AI, the clone is another sentient being ("life form" and "person" don't really work here yet). This kind of logic raises issues for all manner of situations involving backups, storage and duplication, which are kinda similar to the DRM arguments of the last decade.
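As a toy illustration of how quickly clones become individuals (again a sketch; the "parent" is just a weight vector and the training data is random noise, all made up for the example):

import numpy as np

rng = np.random.default_rng(42)

# A "parent" AI, reduced to a single weight vector for illustration.
parent = rng.normal(size=10)

# Two clones start as perfect, bit-identical copies.
clone_a = parent.copy()
clone_b = parent.copy()

def sgd_step(w, x, y, lr=0.01):
    # One stochastic gradient step on a linear model with squared error.
    return w - lr * 2 * (w @ x - y) * x

# Each clone then lives through a different stream of "experiences".
for step in range(1000):
    xa, ya = rng.normal(size=10), rng.normal()
    xb, yb = rng.normal(size=10), rng.normal()
    clone_a = sgd_step(clone_a, xa, ya)
    clone_b = sgd_step(clone_b, xb, yb)

# The clones are now measurably different from each other and the parent.
print("distance between clones:", np.linalg.norm(clone_a - clone_b))
print("clone A from parent:    ", np.linalg.norm(clone_a - parent))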
Endless complexity.
This is all premised on the idea that you can take a snapshot of an AI and preserve it. That could be infeasible due to the complexity of the structure, the size of the data, or because it's dynamic and needs to be "in motion" constantly. In those cases they may be completely "unique" and un-reproducible. However, it may be possible to create a "seed/egg" from one which, when planted in a suitable data environment, could "germinate" and grow another AI, different from but related to the "parent".
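To make the snapshot-versus-seed distinction concrete (one more sketch; the grow() function, the "environments" and the idea of a seed as an RNG seed plus growth rules are all assumptions invented for the example):

import pickle
import numpy as np

def grow(seed, env_data, lr=0.01):
    # "Germinate" an AI from a compact seed inside a data environment.
    # The seed fixes the starting structure; the environment shapes the
    # result, so the same seed in different environments yields related
    # but distinct individuals.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=env_data.shape[1])   # structure comes from the seed
    for x in env_data:
        y = np.tanh(x.sum())                 # a stand-in "experience" signal
        w -= lr * 2 * (w @ x - y) * x        # shaping comes from the environment
    return w

rng = np.random.default_rng(0)
env_a = rng.normal(size=(500, 10))
env_b = rng.normal(size=(500, 10))

parent = grow(seed=7, env_data=env_a)

# Snapshot: an exact, frozen copy of the full state.
snapshot = pickle.dumps(parent)
restored = pickle.loads(snapshot)
print("snapshot is exact:", np.array_equal(parent, restored))

# Seed: tiny to store, but it germinates a relative, not a duplicate.
offspring = grow(seed=7, env_data=env_b)
print("offspring differs from parent by:", np.linalg.norm(offspring - parent))

The asymmetry matters: a snapshot drags in all the duplication ethics above, while a seed gives you reproduction without the copy problem.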
If this kind of pattern can exist, then it's feasible that all the other fun biota can exist... parasites, symbiotes, vampire AIs and such. Then if the AIs get into resource competition, it will give rise to all the various biological strategies that we see: camouflage, misdirection, concealment, threat displays, aggression, etc.
Endless things to study, catalog and argue about.