Showing posts with label Ethics.

Thursday, December 22, 2011

Funny AI Ethics collection

http://tvtropes.org/pmwiki/pmwiki.php/Main/AIIsACrapshoot

There are some great examples of AI ethical issues in this collection.  All the old chestnuts.

Wednesday, November 16, 2011

Schizophrenia in AI

http://spectrum.ieee.org/tech-talk/biomedical/diagnostics/researchers-create-a-schizophrenic-computer?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+IeeeSpectrumBiomedical+%28IEEE+Spectrum%3A+Biomedical%29

This article is interesting for lots of reasons... but the main one is the effect of learning speed ("hyper learning") on the formation of a neural network. Does this suggest that an AI could become prone to mental health issues similar to ours, due to innate structural elements in the NN model?  This is especially relevant for emergent AI, which may be quite poorly formed.  It's something I have not heard anyone talk about... apart from all the "cold logic meets warm humanity" stuff that stems from HAL in 2001: A Space Odyssey.
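
As a toy illustration (my own sketch, nothing to do with the actual model in the article): crank the learning rate of even a trivial gradient-descent learner high enough and it stops converging and starts blowing up, which at least gestures at how "hyper learning" could warp a network rather than train it.

    # Toy sketch (mine, not the model from the article): gradient descent
    # on a tiny least-squares problem with a sane learning rate vs a
    # "hyper learning" rate that overshoots and destabilises the weights.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + rng.normal(scale=0.1, size=100)

    def train(lr, steps=50):
        w = np.zeros(3)
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
            w -= lr * grad
        return np.mean((X @ w - y) ** 2)

    print("normal rate:", train(lr=0.05))  # settles near the noise floor
    print("hyper rate :", train(lr=1.5))   # overshoots every step; loss explodes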

It's interesting to suggest that an AI may be as fragile as humans are in some ways, simply through that shared "DNA" of neural structures, information overload and the effects of time (aging) on its neural forms.

Along with the issues of hardware decay and software rot... it looks like it may be quite challenging to be in the AI health/tech support business.

What do you do with an AI that has mental health issues?  Can you treat it? Do you require ethics clearance? Are you bound by the same "first do no harm" concepts that are applied inequitably to humans? If you can back up the AI and restore it from backup, are you then bound to treat each instance of it with the same respect?  Can you destructively "take one apart" to see what's wrong with it, before applying that knowledge to another instance?

This also raises the issue of what happens if you clone an AI and the clones evolve independently of the original: should they then be treated as individuals (as they technically are different)? This would be reproduction for AI.  The problem is that it's then impossible to use the "clone and take apart" strategy, as every time you clone the AI, the clone is another sentient being ("life form" and "person" do not really work here yet).  This kind of logic raises issues for all manner of situations involving backups, storage and duplication, which are kinda similar to the DRM arguments of the last decade.

Endless complexity. 

This is all premised on the idea that you can take a snapshot of an AI and preserve it.  That could be infeasible due to the complexity of the structure, the size of the data, or because it's dynamic and needs to be "in motion" constantly.  In those cases an AI may be completely "unique" and unreproducible.  However, it may be possible to create a "seed/egg" from one, which when planted in a suitable data environment could "germinate" and grow another AI, different from but related to the "parent".

If this kind of pattern can exist, then it's feasible that all the other fun biota can exist... parasites, symbiotes, vampire AI and such.  And if AIs get into resource competition, it will give rise to all the various biological strategies that we see: camouflage, misdirection, concealment, threat displays, aggression, etc.

Endless things to study, catalog and argue about. 

Thursday, November 10, 2011

Telesar V telepresence robot

http://www.theverge.com/2011/11/7/2543995/telesar-v-robot-can-see-feel-and-hear

This is another iteration on a long-running topic. Looks like some useful problems have been solved.

Monday, November 7, 2011

Editors + Content Creators = Quality?

http://sparksheet.com/return-of-the-editor-why-human-filters-are-the-future-of-the-web/

This article is interesting and makes a good point; the same point being made in academic circles, that peer review and editorial selection are the only way to separate the wheat from the chaff.  Without peer review and examination by the community, it becomes increasingly difficult to stay relevant and in touch, let alone maintain "accepted" standards of presentation and language.

The contrary argument that always gets trotted out is that this suppresses "new" or radical ideas, which tend to be crowded out (overtly or covertly) by "established" ones.  While I think this is a danger, it's true of any system that contains "vested interests" or "assumptions".  Revolutionary change (great strides) is difficult for any system that is geared toward evolutionary change (many small steps).  This is more a factor of the mechanism of the system than of the desire or intent of any one person within it.

So what about algorithmic quality filters? Do they allow or reject revolutionary content?  My guess would be... it depends! An algorithm is a reflection of the programmer(s) who created it, which usually means it carries all their assumptions and flaws.  Put more simply: is the algorithm essentially a blacklist or a whitelist model?  Algorithms smart enough to do semantic analysis are just not yet feasible at large scale, but hopefully they will be making an entrance soon and go some way toward solving this type of problem.
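
To make the blacklist/whitelist point concrete, here's a minimal sketch (the word lists are invented for illustration, not from the article): both filters hard-code their author's assumptions about what "quality" looks like, they just fail in opposite directions when something genuinely novel shows up.

    # Toy sketch of the blacklist vs whitelist point. The word lists are
    # made up; any real filter would encode its author's biases the same way.
    BLACKLIST = {"viagra", "clickbait", "free money"}   # default: allow
    WHITELIST = {"peer review", "methodology", "data"}  # default: reject

    def blacklist_filter(text: str) -> bool:
        """Accept anything not flagged -- permissive, easy to game."""
        t = text.lower()
        return not any(term in t for term in BLACKLIST)

    def whitelist_filter(text: str) -> bool:
        """Accept only known-good markers -- conservative, and exactly the
        mechanism that suppresses genuinely novel content by default."""
        t = text.lower()
        return any(term in t for term in WHITELIST)

    novel = "A radical new framing with no established vocabulary at all"
    print(blacklist_filter(novel))  # True: the novel piece slips through
    print(whitelist_filter(novel))  # False: the novel piece is rejected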

Wednesday, September 28, 2011

Personal Data Stores

Found an interesting list of projects in the personal data store/identity space.

http://owncloud.org/

http://unhosted.org/

http://lockerproject.org/


http://projectdanube.org/
Lots of good ideas here. Well written docs.

https://joindiaspora.com/
This one looks a little raw at the moment but the idea is good.

http://personaldataecosystem.org/
A kind of overview organisation, for a decentralized, disorganized ecosystem...

http://www.idcommons.net/
Interesting group....

http://cyber.law.harvard.edu/research/projectvrm
Project VRM.  Another interesting idea....


Wednesday, July 27, 2011

Ethics vs Culture

http://www.campusreview.com.au/pages/section/article.php?s=News&idArticle=21601

This is an interesting article that highlights the reality of graduates with ethics training meeting the real world and having to make compromises.  And once compromised... it's a slippery slope.

One of the other contributing factors in this kind of situation is that once someone is compromised, society is not forgiving. There is no forum in which to make a mistake, admit it and get back on the ethical path.  So it's easier to hide the compromise, and once you have a dirty secret, to be further compromised.

This same set of problems will occur in every industry where ethics is held in high esteem and is supposed to guide workers in their choices: law enforcement, health, IT, business administration (not politics), etc...

If we are going to build structures on ethics, then we need mechanisms to allow people who have fallen off the true path a little bit to get back on it quickly and easily, otherwise we set the whole system up to fail.

Yes, this kind of thing is exploitable, but I did not suggest that the system should not have a memory...


Thursday, May 26, 2011

Crowdsourcing the crowdsource

http://www.economist.com/blogs/babbage/2011/05/repetitive_tasks

This is an interesting article on abuse and exploitation of the Mechanical Turk system... for the purpose of abuse and exploitation.  It's the kind of symmetry that only a self-balancing market could develop.  I also see this as the future of online labour recruiting: on one side of the transaction is a scammer trying to use the service to gather money, while on the other side is a scammer trying to use the service directly for profit.  In the middle of the bell curve are the regular jobs and the regular workers (who can pass the validation tests).

There is another beautiful aspect to this system: it allows robots to work without discrimination.  It should be possible to build a marketplace where bot writers can write good-enough bots to do the jobs being sourced out (a sketch of what I mean follows below).
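
Something like this (the task API here is entirely hypothetical, not the real Mechanical Turk interface): a bot polls for tasks it thinks it can handle and submits answers like any other worker, skipping the rest.

    # Hypothetical sketch -- the endpoint, fields and answer logic are all
    # invented; the real Mechanical Turk API looks nothing like this.
    import time
    import requests

    TASK_API = "https://example.com/tasks"  # placeholder, not a real service

    def answer(task: dict):
        """Return an answer only for task types the bot can handle well."""
        if task.get("type") == "transcribe_digits":
            return task["image_ocr_guess"]  # pretend OCR ran upstream
        return None  # skip anything the bot can't do well enough

    while True:
        for task in requests.get(TASK_API).json():
            result = answer(task)
            if result is not None:
                requests.post(f"{TASK_API}/{task['id']}/answer",
                              json={"value": result})
        time.sleep(60)  # poll politely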

This is another fun aspect of globalization and the devaluation of the workforce. Robots will replace the lowest skilled workers first.

Self Drive Cars

http://www.scientificamerican.com/article.cfm?id=google-driverless-robot-car

This is a useful article on Google's self-drive cars.  It suggests some of the problems and hints at an interesting ethical dilemma relevant to all "real world" robots when the "saves lives" justification is used.


Tuesday, November 9, 2010

Firesheep plugin

http://codebutler.com/firesheep

This is a little old but still funny. I'm interested in it from a security point of view, but also in seeing what the reaction to it is. It's one of those things that could catch fire and take off, or disappear into the background noise. It's an interesting experiment in public scrutiny and security by obscurity.  The techniques to exploit this hole have been around since cookies were invented and abused for session management over insecure networks... so far it's passed about 1.4 million search results on Google using "firesheep -sheep". The top ten pages are all 100% articles on Firesheep, so I figure the rest of the results are probably pretty good.  LOL.  Way to shine a light on the issue. Let's see if anything happens.

Edit
The general reaction has been twofold. First, the tech press is generally cheering the political objectives while recommending countermeasures. Second, the hysterical non-technical press is decrying the existence of such a terrible weapon... blah blah blah.

Another interesting aspect is the ecosystem of countermeasure tools popping up. BlackSheep and FireShepherd are the two that have sprung fully formed to offer a solution for the ignorant. Does that not strike you as suspicious?  I have read that BlackSheep is actually a DDoS attack client, which I find much more credible than the claim that it magically has some capacity to reach out and touch a passive sniffer application. The description of how it works is kinda credible, but not if you know much about DNS and how Firesheep actually works. Even if it's exploiting a weakness in Firesheep, it's not actually dealing with the underlying issue being highlighted. It would be trivial to rework Firesheep to be impervious to BlackSheep's supposed technique.

As for FireShepherd:

http://blogs.forbes.com/andygreenberg/2010/10/28/how-to-screw-with-firesheep-snoops-try-fireshepherd/

This page has a lightweight description of how it claims to work. Again, it's basically trying to attack a weakness in the Firesheep tool rather than patch the problem that Firesheep is highlighting.  FireShepherd would also probably breach the terms of service of any reasonable network, because it works by intermittently flooding the network with rubbish packets. This sort of activity would probably set off all sorts of DoS-attack detectors and intrusion detection systems, and just generally piss off any network admins who caught you using it.  It's the equivalent of turning on the sprinkler system in a whole building to put out a single candle (that may or may not be there). And just consider the chaos if one paranoid user on the network starts talking about it to their co-workers and encourages them to also install it.  You then have multiple people intermittently DoS'ing the network segment. Genius... (sarcasm)
The first tool sounds like it's a tiny step from being outright scamware, if it's not already malware. The second sounds like a poorly thought out tool with marginal hope of fixing the problem but much larger potential for getting the user banned or prosecuted.
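
For contrast, the actual fix Firesheep is agitating for is boring and well known: serve the whole session over HTTPS and flag the session cookie so the browser never sends it in the clear. A minimal sketch using Python's standard cookie module (Secure and HttpOnly are standard cookie attributes, nothing Firesheep-specific):

    # The real countermeasure, sketched: keep the session on HTTPS and set
    # the Secure (and HttpOnly) flags so the cookie is never exposed to a
    # passive sniffer on the local network.
    from http import cookies

    c = cookies.SimpleCookie()
    c["session_id"] = "opaque-random-token"
    c["session_id"]["secure"] = True    # only ever sent over HTTPS
    c["session_id"]["httponly"] = True  # not readable from page JavaScript
    print(c.output())
    # -> Set-Cookie: session_id=opaque-random-token; Secure; HttpOnly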

Nothing has turned up yet about dealing with false positives, or about the social consequences of detecting an attacker and how to deal with them ethically or safely.  I would assume that the common witch-hunt rules apply: if you think someone is running a sniffer on the network, you can unilaterally employ the "strike first" approach and burn them publicly so you feel all safe again.  There is no actual evidence (unless your Facebook profile has been hijacked by a completely incompetent person who signs all their fake posts with their real name... but then how would you even prove that was their real name?).  Endless fun with digital forensics.

So we have a scary mix of paranoia, uncertainty, ignorance, exploitative tool developers, no useful solutions from most of the affected sites and a bubbling pool of anger, distrust and the usual illusion of invulnerability that internet users get when they feel safe and anonymous. Nothing bad could happen here...