Fast key-value stores, an idea whose time has come and gone
In ProtoCache, replacing the RInK (remote in-memory key-value) store with stateful application servers improved median latency by 29-57% and cut CPU usage by more than 40%.
Sapere Aude
COWEN: But then you must think we’re not doing enough theory today. Or do you think it’s simply exhausted for a while? VARIAN: Well, one area of theory that I’ve found very exciting is algorithmic mechanism design. With algorithmic mechanism design, it’s a combination of computer science and economics. The idea is, you take the economic model, and you bring in computational costs, or show me an algorithm that actually solves that maximization problem. Then on the other side, the computer side, you build incentives into the algorithms. So if multiple people are using, let’s say, some communications protocol, you want them all to have the right incentives to have the efficient use of that protocol. So that’s a case where it really has very strong real-world applications to doing this — everything from telecommunications to AdWords auctions.
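Varian's point about building incentives into algorithms is classically illustrated by the sealed-bid second-price (Vickrey) auction, the ancestor of the AdWords auction he mentions: because the winner pays the runner-up's bid rather than their own, bidding one's true value is a dominant strategy. A minimal sketch, assuming nothing beyond the standard library (`Bid` and `run_auction` are illustrative names, not from any real library):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str
    amount: float  # the bidder's reported value

def run_auction(bids):
    """Second-price auction: highest bidder wins, pays the second-highest bid.
    The price does not depend on the winner's own bid, so truthfully
    reporting one's value is a dominant strategy (incentive compatibility)."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner.bidder, runner_up.amount

winner, price = run_auction([Bid("a", 10.0), Bid("b", 7.0), Bid("c", 4.0)])
# winner is "a", but the price charged is b's bid of 7.0
```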
Traditional approaches to training exploit either data parallelism (dividing up the training samples), model parallelism (dividing up the model parameters), or expert-designed hybrids for particular situations. FlexFlow encompasses both in its sample (data parallelism) and parameter (model parallelism) dimensions, and also adds an operator dimension (more model parallelism) describing how operators within a DNN should be parallelized, and an attribute dimension which defines how different attributes within a sample should be partitioned (e.g., the height and width of an image).
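The difference between the sample and attribute dimensions can be sketched on a toy tensor: splitting along the batch axis is data parallelism, while splitting along the height axis partitions attributes within each sample. This is only an illustration of the partitioning choices, not FlexFlow's actual implementation:

```python
import numpy as np

# Toy batch: 8 samples, each a 32x32 image with 3 channels.
batch = np.zeros((8, 32, 32, 3))

# Sample dimension: classic data parallelism -- each of two devices
# gets half of the samples.
sample_shards = np.split(batch, 2, axis=0)     # two shards of shape (4, 32, 32, 3)

# Attribute dimension: split the height axis instead, so each device
# sees every sample but only the top or bottom half of its pixels.
attribute_shards = np.split(batch, 2, axis=1)  # two shards of shape (8, 16, 32, 3)
```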
Anecdotal evidence from some teams suggests a mental shift towards a data-centric view of ML, where the schema is not solely used for data validation but also provides a way to document new features that are used in the pipeline and thus disseminate information across the members of the team.
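A minimal sketch of that dual role, a schema that both validates incoming examples and documents each feature for teammates. The schema shape, feature names, and `validate` helper are all hypothetical; production pipelines would use something like TensorFlow Data Validation's schema rather than a plain dict:

```python
# Hypothetical schema: each entry validates a feature's type and
# doubles as team-facing documentation of what the feature means.
SCHEMA = {
    "user_age": {"type": int,   "doc": "Age in whole years"},
    "ctr_7d":   {"type": float, "doc": "7-day click-through rate"},
}

def validate(example: dict) -> list:
    """Return a list of validation errors; an empty list means the example passes."""
    errors = []
    for name, spec in SCHEMA.items():
        if name not in example:
            errors.append(f"missing feature: {name}")
        elif not isinstance(example[name], spec["type"]):
            errors.append(f"{name}: expected {spec['type'].__name__}")
    return errors

ok = validate({"user_age": 34, "ctr_7d": 0.12})        # no errors
bad = validate({"user_age": "34"})                     # wrong type + missing feature
```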
The goal is to produce a low-energy hardware classifier for embedded applications doing local processing of sensor data. To get there, the authors question a whole bunch of received wisdom, beginning with this: do we really need to convert the analog sensor data into a digital signal?! Here’s another fun one: what if instead of being something you worked hard to avoid, you had to build your whole application based on the outcomes of data races??!
We have adapted a complete C, C++, and assembly-language software stack, including the open source FreeBSD OS (nearly 800 UNIX programs and more than 200 libraries including OpenSSH, OpenSSL, and bsnmpd) and PostgreSQL database, to employ ubiquitous capability-based pointer and virtual-address protection.
a documentary film about Claude Shannon, the underrated “Father of Information Theory”, whose work, more than anyone else’s, laid the foundation for the information age
comparison of various knowledge graphs
Math has used brute force to find solutions for quite a while.
What is the family of problems that can be consistently computed in a distributed fashion without coordination, and what problems lie outside that family?