Tag: ai

AI Ethics

I looked a bit at ethics in neural network science/engineering. As I see it, there are three categories of ethical issues specific to the topic, as opposed to general professional ethics issues:

  • Issues surrounding applications, such as privacy, big data, surveillance, killer robots, etc.
  • Machine learning allows machines to learn the wrong things.
  • Machines as moral agents or patients.

The first category is important, but I leave that for others to discuss. It is not necessarily linked to neural networks per se, anyway. It is about responsibility for technology and what one works on.

AIs beat IQ tests

Our model can reach the intelligence level between people with bachelor’s degrees and those with master’s degrees

and

it’s taken 60 years of AI research to build a machine in 2012 that can come anywhere close to matching the common-sense reasoning of a 4-year-old. But the nature of exponential improvements raises the prospect that the next 6 years might produce similarly dramatic improvements. So a question that we ought to be considering with urgency is: what kind of AI machine might we be grappling with in 2018?

AI prospects for biology

Banging through it all, though, to come up with a model that fit the data, tweaking and prodding and adjusting and starting all over when it didn’t work – which is what the evolutionary algorithms did – takes something else: inhuman patience and focus. That’s what computers are really good at, relentless grinding. I can’t call it intelligence, and I can call it artificial intelligence only in the sense that an inflatable palm is an artificial tree. I realize that we do have to call it something, but the term “artificial intelligence” probably confuses more than it illuminates.
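
The “relentless grinding” described above boils down to a very simple loop: perturb a candidate model, keep the perturbation only if the fit to the data improves, and repeat for a long time. Below is a minimal sketch of that idea, a toy (1+1) evolution strategy fitting a made-up linear model; everything in it (the model, the data, the step size) is illustrative, not the system the quote refers to.

    import random

    # Toy (1+1) evolution strategy: mutate the parameters of a tiny linear
    # model, keep the mutant only if it fits the data better. Illustrative
    # only; real evolutionary model-fitting systems are far more elaborate.

    def fitness(params, data):
        a, b = params
        # Mean squared error of y = a*x + b against the data points.
        return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

    def evolve(data, generations=10_000, step=0.1):
        best = [random.uniform(-1, 1), random.uniform(-1, 1)]
        best_err = fitness(best, data)
        for _ in range(generations):
            mutant = [p + random.gauss(0, step) for p in best]
            err = fitness(mutant, data)
            if err < best_err:  # keep the tweak only if it helps
                best, best_err = mutant, err
        return best, best_err

    if __name__ == "__main__":
        data = [(x, 2.0 * x + 1.0) for x in range(20)]  # synthetic target: y = 2x + 1
        params, err = evolve(data)
        print(params, err)  # parameters should drift toward [2.0, 1.0]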

2022-12-09: Some new hopes for paper mining, but see this caveat.

SciHub has 88M papers. If we extrapolate the Semantic Scholar dataset statistics (2,600 words per article) and allow for some paper loss due to old or faulty PDFs, it is reasonable to expect roughly 200B tokens of scientific text, about 10x the Minerva training set of arXiv papers (21B tokens). That would be a roughly 10x boost in the technical knowledge that could live inside LLMs, compared to current models.
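
Spelling out the arithmetic behind that estimate: the 88M papers and 2,600 words per article come from the text above, while the tokens-per-word ratio and the fraction of papers surviving PDF extraction are assumptions plugged in here just to show the 200B figure is in the right ballpark.

    # Back-of-the-envelope check of the token estimate above.
    papers = 88_000_000            # SciHub papers (from the text)
    words_per_paper = 2_600        # Semantic Scholar average (from the text)
    tokens_per_word = 1.3          # assumed; typical BPE tokenizers land near here
    usable_fraction = 0.7          # assumed loss from old/faulty PDFs

    tokens = papers * words_per_paper * tokens_per_word * usable_fraction
    print(f"{tokens / 1e9:.0f}B tokens")                      # ~208B tokens
    print(f"{tokens / 21e9:.1f}x Minerva's arXiv set (21B)")  # ~9.9x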

There will be a universal language of physical science work that does not speak directly to humans. Monolithic cloud labs alone may not be the optimal deployment of automated biology in the future. Projects like PyHamilton demonstrate growing open-source communities for benchtop automation, and the SayCan collaboration between Google and Everyday Robots is a reminder of how steadily multifunctional robots (as well as ultralight indoor drones) are progressing. As the cost curve comes down and natural-language programmability goes up, there may be an intersection at which it becomes easier to convert an existing lab environment or protocol into an automated one than to outsource the work to a physically separate facility. Or there may be a steady-state split in which some tasks are best served by large automated warehouses and others by more distributed edge labs. If the future holds multiple providers of robotic work, interoperability will become a bottleneck, which will motivate a universal formalization of life science work.

RNN unreasonable effectiveness

I took all 474MB of Linux C code and trained several LSTMs. The code looks really quite great overall. Of course, I don’t think it compiles, but when you scroll through it, it feels very much like a giant C code base. Notice that the RNN peppers its code with comments here and there at random. It is also very good at making very few syntactic errors.
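
For context, the model being described is a character-level LSTM trained to predict the next character of the corpus. The original experiment used Torch/Lua on the full 474MB of C; the following is only a minimal PyTorch sketch on a tiny stand-in corpus, not a reproduction of that setup.

    import torch
    import torch.nn as nn

    # Character-level LSTM sketch: learn to predict the next character.
    text = "int main(void) { return 0; }\n" * 200   # stand-in for 474MB of C
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

    class CharLSTM(nn.Module):
        def __init__(self, vocab, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x, state=None):
            h, state = self.lstm(self.embed(x), state)
            return self.head(h), state

    model = CharLSTM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=3e-3)
    loss_fn, seq_len = nn.CrossEntropyLoss(), 64

    for step in range(200):                           # tiny training loop
        i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
        x = data[i:i + seq_len].unsqueeze(0)          # input characters
        y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # next-character targets
        logits, _ = model(x)
        loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Sample 80 characters from the trained model.
    with torch.no_grad():
        idx, state, out = data[:1].unsqueeze(0), None, []
        for _ in range(80):
            logits, state = model(idx, state)
            probs = torch.softmax(logits[0, -1], dim=-1)
            idx = torch.multinomial(probs, 1).unsqueeze(0)
            out.append(chars[idx.item()])
    print("".join(out))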

Human-level object description

Computer vision isn’t just object recognition anymore. Our team has developed a deep learning system that can look at a picture and try to answer specific questions about it, such as “What is the color of the bus?” or even the more complex “What is there on the grass, except the person?” If you had told me 2 years ago we’d be able to do this today, I wouldn’t have believed you.
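
The system quoted here was never released, but the same capability, visual question answering, is available off the shelf today. A sketch using the Hugging Face transformers pipeline with one commonly used open checkpoint (dandelin/vilt-b32-finetuned-vqa); the image path is a placeholder for any local photo, and none of this is the system described in the quote.

    from transformers import pipeline  # needs: pip install transformers pillow torch

    # Visual question answering with an open ViLT checkpoint (illustrative).
    vqa = pipeline("visual-question-answering",
                   model="dandelin/vilt-b32-finetuned-vqa")

    for question in ["What is the color of the bus?",
                     "What is there on the grass, except the person?"]:
        # "bus.jpg" is a placeholder for a local image file.
        answers = vqa(image="bus.jpg", question=question, top_k=1)
        print(question, "->", answers[0]["answer"])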

Future filter bubbles

how we’ll deal with a future where you’ll only be exposed to what you want to see.

I think most people, if asked “Is it important to listen to arguments by people who disagree with you?” would answer in the affirmative. I also think most people don’t really do this. Maybe having to set a filter would make people explicitly choose to allow some contrary arguments in. Having done that, people could no longer complain about seeing them – they would feel more of an obligation to read and think about them. And of course, anyone looking for anything more than outrage-bait would choose to preferentially let in high-quality, non-insulting examples of disagreeing views, and so get inspired to think clearly instead of just starting one more rage spiral.