Tag: algorithm

Hypergraphs

The higher-order analogue of a graph is called a hypergraph, and instead of edges, it has “hyperedges.” Purvine and her colleagues analyzed a database of biological responses to viral infections, using hypergraphs to identify the most critical genes involved. They also showed how those interactions would have been missed by the usual pairwise analysis afforded by graph theory. However, generalizing from graphs to hypergraphs quickly gets complicated. In an ordinary graph, a cut severs edges to split the nodes into separate groups; there are many ways of generalizing that notion to a hypergraph, but no single clear solution, because a hyperedge can be severed in various ways, creating new groups of nodes.
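As a toy illustration (mine, not from the article), here is how two of the competing cut definitions can already disagree on the same tiny hypergraph once the nodes are split into more than two groups:

```python
# Toy hypergraph: each hyperedge is a set of nodes (hypothetical example).
hyperedges = [
    {"a", "b", "c"},       # a 3-way interaction no single pairwise edge captures
    {"c", "d"},            # an ordinary pairwise edge
    {"b", "d", "e", "f"},  # a 4-way interaction
]

# One possible partition of the nodes into three groups.
partition = [{"a", "b"}, {"c", "d"}, {"e", "f"}]

def parts_touched(edge, partition):
    """How many groups of the partition this hyperedge spans."""
    return sum(1 for part in partition if edge & part)

# Two of the many reasonable "cut" costs for the same partition:
all_or_nothing = sum(1 for e in hyperedges if parts_touched(e, partition) > 1)
fragmentation = sum(parts_touched(e, partition) - 1 for e in hyperedges)

print(all_or_nothing)  # 2: counts only whether a hyperedge is severed at all
print(fragmentation)   # 3: the 4-node hyperedge splits into 3 pieces, costing 2
```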

2023-04-13: Another powerful concept, only vaguely related, is hypervectors (though this GitHub project seems to let you play with both).

Hyperdimensional computing promises a new world in which computing is efficient and robust, and machine-made decisions are entirely transparent.
Hyperdimensional computing tolerates errors better, because even if a hypervector suffers a significant number of random bit flips, it is still close to the original vector. Another advantage of hyperdimensional computing is transparency: the algebra clearly tells you why the system chose the answer it did. The same is not true for traditional neural networks. It’s also compatible with “in-memory computing systems,” which perform the computation on the same hardware that stores the data (unlike existing von Neumann computers, which inefficiently shuttle data between memory and the central processing unit). Some of these new devices can be analog, operating at very low voltages, which makes them energy-efficient but also prone to random noise. For von Neumann computing, this randomness is “the wall that you can’t go beyond”. But with hyperdimensional computing, “you can just punch through it.”
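A rough sketch of that error-tolerance claim (my own illustration, with made-up dimensions and noise levels): in 10,000 dimensions, a hypervector with 10% of its bits flipped is still unmistakably closer to its original than any unrelated vector is.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervectors live in very high dimensions, e.g. 10,000 bits

def random_hv():
    """A random binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def similarity(a, b):
    """Fraction of matching bits: 1.0 = identical, ~0.5 = unrelated."""
    return float(np.mean(a == b))

original = random_hv()
unrelated = random_hv()

# Flip 10% of the bits at random, simulating noisy analog hardware.
noisy = original.copy()
flips = rng.choice(D, size=D // 10, replace=False)
noisy[flips] ^= 1

print(similarity(original, noisy))      # ~0.90: still clearly "the same" vector
print(similarity(original, unrelated))  # ~0.50: indistinguishable from chance
```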

NAT traversal

If the IP address is correct, our only unknown is the port. There are 65,535 possibilities… Could we try all of them? At 100 packets/sec, that’s a worst case of just under 11 minutes to find the right one. It’s better than nothing, but not great. And it really looks like a port scan (because, in fairness, it is), which may anger network intrusion detection software.

We can do much better than that, with the help of the birthday paradox. Rather than open 1 port on the hard side and have the easy side try 65535 possibilities, let’s open, say, 256 ports on the hard side (by having 256 sockets sending to the easy side’s ip:port), and have the easy side probe target ports at random.

If we stick with a fairly modest probing rate of 100 ports/sec, 50% of the time we’ll get through in under 2 seconds. And even if we get unlucky, 20 seconds in we’re virtually guaranteed to have found a way in, after probing less than 4% of the total search space.
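Plugging those numbers into a quick back-of-the-envelope script (my sketch, assuming the easy side probes distinct ports at random) reproduces the odds quoted above:

```python
# Birthday-paradox odds for the NAT traversal trick described above.
TOTAL_PORTS = 65_535   # possible destination ports
OPEN_PORTS = 256       # ports the hard side has opened
RATE = 100             # probes per second from the easy side

def p_hit_by(probes, open_ports=OPEN_PORTS, total=TOTAL_PORTS):
    """Chance of at least one hit after `probes` distinct random probes."""
    p_miss = 1.0
    for i in range(probes):
        p_miss *= (total - open_ports - i) / (total - i)
    return 1.0 - p_miss

for seconds in (1, 2, 5, 20):
    print(f"{seconds:>2}s  {p_hit_by(seconds * RATE):.3f}")
# Roughly: 1s -> 0.32, 2s -> 0.54, 5s -> 0.86, 20s -> ~1.00,
# and 20 seconds of probing covers only ~3% of the 65,535-port search space.
```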

Game of Life at 50

Patterns that didn’t change from one generation to the next, Dr. Conway called still lifes, such as the 4-celled block, the 6-celled beehive or the 8-celled pond. Patterns that took a long time to stabilize, he called methuselahs. The tree of Life also includes oscillators, such as the blinker, and spaceships of various sizes (the glider being the smallest). In 2018, there was a much-celebrated discovery of a special kind of spaceship, the first elementary knightship, named Sir Robin. Made of hundreds of cells, it moves 2 cells forward and 1 sideways every 6 generations.
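For context, the rules that generate all of these life forms fit in a few lines. A minimal sketch (mine), stepping the 3-cell blinker through one full period:

```python
from collections import Counter

def step(live):
    """live: set of (x, y) live cells; return the next generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours, survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}     # horizontal bar of 3 cells
print(step(blinker))                    # becomes a vertical bar
print(step(step(blinker)) == blinker)   # True: a period-2 oscillator
```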

It’s amazing that 50 years later, people are still discovering new life forms.

Imagenet Roulette

The ImageNet researchers attribute the inclusion of offensive and insensitive categories to the overall size of the task, which ultimately involved 50K workers who evaluated 160M candidate images. They also point out that only a fraction of the “person” images were actually used in practice. That’s because references to ImageNet typically mean a smaller version of the data set used in the ImageNet Challenge.

AI-generating algorithms

Perhaps the most ambitious scientific quest in human history is the creation of general artificial intelligence, which means AI that is as smart as or smarter than humans. The dominant approach in the machine learning community is to attempt to discover each of the pieces required for intelligence, with the implicit assumption that some future group will complete the Herculean task of figuring out how to combine all of those pieces into a complex thinking machine. I call this the “manual AI approach”. This paper describes another exciting path that ultimately may be more successful at producing general AI. It is based on the clear trend in machine learning that hand-designed solutions eventually are replaced by more effective, learned solutions. The idea is to create an AI-generating algorithm (AI-GA), which automatically learns how to produce general AI. 3 Pillars are essential for the approach: (1) meta-learning architectures, (2) meta-learning the learning algorithms themselves, and (3) generating effective learning environments. I argue that either approach could produce general AI first, and both are scientifically worthwhile irrespective of which is the fastest path. Because both are promising, yet the ML community is currently committed to the manual approach, I argue that our community should increase its research investment in the AI-GA approach. To encourage such research, I describe promising work in each of the 3 Pillars. I also discuss AI-GA-specific safety and ethical considerations. Because it may be the fastest path to general AI, and because it is inherently scientifically interesting to understand the conditions under which a simple algorithm can produce general AI (as happened on Earth, where Darwinian evolution produced human intelligence), I argue that the pursuit of AI-GAs should be considered a new grand challenge of computer science research.

2 Generals problem

One night in September 2018, the food delivery service Deliveroo went haywire. It sent some customers the same food order several times, and other customers got nothing. In this video, Tom Scott explains that this was a classic example of the “2 Generals” problem, and how Deliveroo’s “Night of the Multiple Orders” could have been mitigated by using something called an “idempotency token,” which ensures a given transaction happens only once.
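A minimal sketch of that idea (my illustration, not Deliveroo’s actual system): the client generates one token per logical order and reuses it on every retry, so the server can safely deduplicate.

```python
import uuid

# Hypothetical order endpoint: the token, not the payload, decides whether
# the side effect (charging the card, dispatching the order) runs again.
processed = {}  # idempotency token -> result of the order already placed

def place_order(token, order):
    if token in processed:
        return processed[token]        # retry: return the earlier result, no new order
    result = f"order placed: {order}"  # stand-in for the real side effect
    processed[token] = result
    return result

token = str(uuid.uuid4())              # generated once, reused across retries
print(place_order(token, "pizza"))     # first attempt succeeds
print(place_order(token, "pizza"))     # timed-out retry: same result, no duplicate
```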