Tag: del.icio.us

Social network suicide

I pay for my flickr account. I will commit social network suicide if they mess them up. I bet a lot of other people will too. We are getting to an age where we can build an open social-network tool set based on open principles and open tagging.

luckily, i already have all the yahoo spam domains in my host file, so i won’t get any of those ads

Firefox

I will switch to mozilla in the next couple of days. I have watched Mozilla since about M5 (May 1999), but never found it superior to IE on Windows. A few factors made me switch:

Now, I hope that mozilla.org moves to apache.org to enable a really competitive XML stack.
2004-06-13: Mozilla 2.0. Brendan Eich: Mozilla 2.0 platform must-haves
2005-03-30: foxylicious. For a while, there were only dorky extensions available for mozilla-based products. More recently however, genuinely useful extensions are popping up left and right. My latest discovery is foxylicious, an extension that syncs your del.icio.us bookmarks into your browser bookmarks, and back. Very cool.
2005-11-26: here is what I am currently using to make browsing safer and less annoying:

  • Use Firefox (duh)
  • Don’t install the Flash plugin
  • Turn off “Allow sites to set cookies” and keep a small whitelist
  • Use NoScript to only allow javascript on a small number of sites
  • Install this hosts file to remove most advertising
  • Use TargetKiller to get rid of pages opening up in new windows
  • Disable Java

It’s amazing how much faster and more pleasant the web becomes once you take the garbage out.
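The hosts-file trick in the list above works by pointing ad and tracker hostnames at an address that goes nowhere, so the requests never leave the machine. A minimal sketch (the domains here are made-up examples; real blocklists run to thousands of entries):

```
# /etc/hosts (on Windows: %SystemRoot%\system32\drivers\etc\hosts)
0.0.0.0  ads.example.com
0.0.0.0  tracker.example.net
```

Using 0.0.0.0 instead of 127.0.0.1 tends to fail faster, since the connection is rejected immediately rather than waiting on a local listener.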
2006-09-27: Flux Player for X3D

The free Flux Player is a browser plug-in for viewing and interacting with X3D content and virtual worlds.

2006-10-03: Web3D Vision

AB: Vlad, what’s your vision of “Web3D?” How do you differentiate it from the common “2D web” experience and existing “networked 3D” applications?

VV: I think the 3D web is somewhat of a middle ground between the 2, blending 3D with 2D content. It should also retain one of the main characteristics of early HTML content, which is that you should be able to copy-and-paste from sites that you visit to bring the same content to your own sites.

“Web3D” won’t be about meshes and normal maps and fragment shaders, but it will be about enhancing the current web experience with 3D content, whether that’s for data visualization, for aesthetics, or for novelty factor. Taking advantage of 3D features in UI is also an interesting area, and I hope that putting 3D capabilities alongside HTML will allow for easier experimentation in that area.

2006-10-04: big-time refactoring ahead: RDF is out, as is XPCOM. We’ll be lucky if we get all of this by 2008, indeed
2006-10-05: Nightly Tester Tools allows you to make extensions work with ff 2.0 and other good stuff
2006-11-26: layout.spellcheckDefault. How to enable FF spell check consistently (also for single-line controls)
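For reference, the pref takes 0 (off), 1 (multi-line text fields only) or 2 (all text fields, including single-line controls), so the consistent-spell-check setup amounts to one line in user.js in the profile directory:

```
// user.js — 2 = spell check multi-line and single-line text controls
user_pref("layout.spellcheckDefault", 2);
```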
2006-12-15: FF Microformats. Mozilla Labs “Microformat Detection Extension for Firefox 2”. With ie8 reportedly having microformats built in, this is a no-brainer. Now let’s hope this helps the data web along.
2007-01-02:

If Mozilla proceeds with this goal for Firefox 3 to be a broker of information, then that will significantly raise the stakes in the browser war again. Microsoft will surely follow and the smaller browsers will innovate around microformats to keep ahead. And it makes perfect sense for the web browser to do brokering, because information is so fluid and ‘small pieces loosely joined’ these days. There’s a best of breed app for every data type – so why not use the best app where possible?

hopefully, ie8 and ff3 will agree on some common microformats features, while competing on the implementation
2006-12-23: Cliche Finder highlights cliches on a page. maybe this could be the next step for the firefox inline spell checker, which i love. with widespread tools like these, maybe the deterioration of language can be countered or even overcome.
2007-01-25: Firebug 1.0

2007-01-29: Firefox EXSLT. This should help with XML editors like BXE
2007-02-12: FF offline web apps taking shape. No ui changes, simple to implement. Nice!
2007-04-06: Fullerscreen. Finally parity with IE. This should be great for kiosks
2007-04-19: Better Gmail

Better Gmail is a compilation of Greasemonkey user scripts that add features to Gmail

Super awesome. I spend a lot of time in gmail, and this really helps
2007-05-06: Firebug Staffing. Nice. Yahoo rarely does anything right over there, but this is one of those times
2007-05-19: Whereami. Chregu is taking a per-site approach to geolocation. I think this is a good strategy until we have a standard way of doing this
2007-07-24: YSlow for Firebug. Nice, even though it’s basic. It told me to use a CDN for maps.google.com. Riiight.
2007-07-28: SVG Photos demo

The demo situation for open web technologies is pretty bad; you can find lots of flashy demos all over the place when you look at, say, Silverlight, but it’s hard to find demo-sized chunks of code to show off the capabilities of the web.

GeoFlock

The geoFlock Extension gives you a suite of in-built maps and mapping tools for your web browser. Create and save a topbar map using the addresses or address links you find on web pages, or by manually adding locations. Geotag Flickr photos within the Flock photo uploader, geotag blog posts within the blog editor (including microformat geo class values, view in google maps/google earth links and insertion of quikmaps.com maps).

exposes lots of gmaps features in a ff extension. if our UI weren’t so complicated, maybe users would find all these features themselves.
2007-08-06: Only 25% of downloaders actually become ff users. Wow. Say hi to “Firefox Internet” or “Internet with Firefox”

2007-08-10: Unusable Firefox

Firefox locks up several times a day. I found a forum which suggested disabling the anti-phishing functionality. Several people seemed to have benefited from said disabling.

I see the stupid beach ball cursor far too often. You’d think that hanging apps were a thing of the 90s, but apparently not.
2007-08-11: Xvfb + Firefox. How to run jsunit in xvfb properly
2007-10-27: Firefox Kerning. ff 3 turns on kerning and takes the performance hit, but the result looks a lot better. Now we only need non-ugly fonts and subpixel rendering on linux.
2007-11-09: Firefox Memory fragmentation. Wow, bleak picture for firefox. Even with all their fixes for ff3, this still sucks a lot and we’ll probably see leaner browsers take over. Such as khtml based ones.
2007-11-26: Canvas 3D exposes opengl as a ff extension. With kmz / collada example.
2008-01-06: Firefox Viral Campaign

Compared to Internet Explorer users, Firefox users are 21% less likely to be a sales representative or agent at their current place of business.

2008-01-07: Browser Sync. I wish we maintained this better. It would be useful to sync cookie allowlists, at the very least.
2008-01-13: Thoughts on Firefox 3.0. Nice, detailed review
2008-02-05: iGoogleBar. Neat. I like the tighter integration.
2008-02-28: Firefox Throttle. Bandwidth utilization throttling plug-in for Firefox (Windows only)
2008-02-29: FF 3 Performance Boost. At this point networking is the bottleneck, particularly the MTU, which is still capped at 1500 bytes. Jumbo frames, where are you?
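As a rough sanity check on how much a larger MTU buys in raw efficiency (assuming plain TCP/IPv4 with 40 bytes of headers and no options), the per-packet overhead is easy to work out:

```python
def payload_efficiency(mtu, headers=40):
    """Fraction of each packet that is actual payload (TCP/IPv4, no options)."""
    return (mtu - headers) / mtu

standard = payload_efficiency(1500)   # standard Ethernet MTU
jumbo = payload_efficiency(9000)      # typical jumbo-frame MTU
print(f"1500-byte MTU: {standard:.1%} payload")
print(f"9000-byte MTU: {jumbo:.1%} payload")
```

The payload fraction only moves from about 97% to about 99.6%; the bigger win from jumbo frames is fewer packets, and therefore less per-packet processing.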
2008-03-11: FF 3 Memory Usage

We’re significantly smaller than previous versions of Firefox and other browsers. You can keep the browser open for much longer using much less memory.

this rocks.
2008-03-21: iMacros for Firefox

iMacros was designed to automate the most repetitious tasks on the web.

useful for those infuriating, user-hostile online banking sites
2008-04-21: Mozilla JS Shell Server

allows other programs (such as telnet) to establish JS shell connections to a running Mozilla process. This functionality is useful for interactive debugging/development of Mozilla applications, remotely controlling Mozilla, or for automated testing purposes

JS shell examples
2008-04-30: Del.icio.us for FF

Today I’m pleased to announce a beta release of an enhanced version of our Firefox Add-on for del.icio.us that now has full Firefox 3 support

verdict: it sucks. The save-a-bookmark dialog no longer autocompletes, and they now spam you with an inane “you need cookies to use delicious” on startup. I couldn’t care less about the other crap they added when the basics don’t work.
2008-06-06: Slow Firefox

heavy use of Firefox 3 on a Linux system can cause the system as a whole to perform poorly. It’s clearly an issue that the Firefox developers need to fix, even if it’s not entirely their fault.

that fsync nonsense takes all the fun out of the otherwise decent speed of firefox 3.
2008-06-12: FF Microformats API. Here’s hoping microformats finally take off. My code is ready 🙂
2008-08-29: TraceMonkey. More JITing for js. Now if only the DOM were not so slow.
2008-09-02: TraceMonkey is head to head with V8. Interesting times. Now someone should do something about jumbo frames to speed up the networking part, too.
2008-10-06: Geode

an experimental add-on to explore geolocation in Firefox 3 ahead of the implementation of geolocation in a future product release. Geode provides an early implementation of the W3C Geolocation specification so that developers can begin experimenting with enabling location-aware experiences using Firefox 3 today, and users can tell us what they think of the experience it provides. It includes a single experimental geolocation service provider so that any computer with WiFi can get accurate positioning data.

Sigh, why did that take 3 years? It should have shipped long ago.
2008-11-05: Layout engine comparison. Lots of detailed tables. Looks fairly up to date
2009-01-12: FoxReplace. Quite useful for tedious HTML forms work
2009-01-19: instrumented Firefox. firefox is becoming ever more of a web app, now adding instrumentation / experiments.
2009-02-05: Firefox.next

Some preliminary work has been done on identifying key elements of all 3 projects, and we will continue to refine these plans in order to get a good jump on things as Firefox 3.1 finishes. You can find these first passes here: Personas, Ubiquity, Prism. We absolutely crave feedback. We hope to get to a crisper and more tightly-scoped set of plans over the next month or so, and we’ll continue to point out when there are more changes that we’d like feedback on.

I am decidedly unimpressed. Why bother making the chrome of the browser more interesting, when you can make the browser viewport more interesting and powerful?
2009-05-29: Firefox Trojan

the .NET update automatically installs its own Firefox add-on that is difficult — if not dangerous — to remove, once installed. Annoyances.org, which lists various aspects of Windows that are, well, annoying, says “this update adds to Firefox one of the most dangerous vulnerabilities present in all versions of Internet Explorer: the ability for Web sites to easily and quietly install software on your PC.”

A trojan for .net, only uninstallable via the registry. Getting desperate?
2009-06-11: Mozilla Jetpack. Writing ff extensions with just html, css, js. This has the potential to be huge.
2009-07-06: Multi Process Mozilla

Ben Turner and Chris Jones have borrowed the IPC message-passing and setup code from Chromium.

Better Science

It is time the academic review process was made more transparent. Bertrand Meyer on why reviews could learn from open source and weblogs. If most of your thoughts are in the public record, it affects your thought processes. For the better, I think.

It is widely believed that anonymous refereeing helps fairness, by liberating reviewers from the fear that openly stated criticism might hurt their careers. In my experience, the effects of anonymity are worse than this hypothetical damage. Referees too often hide behind anonymity to turn in sloppy reviews; worse, some dismiss contributions unfairly to protect their own competing ideas or products. Even people who are not fundamentally dishonest will produce reviews of unsatisfactory quality out of negligence, laziness or lack of time because they know they can’t be challenged. Putting your name on an assessment forces you to do a decent job.

2004-09-21: Stefano on the dynamics of academic peer review.

The day that you find a more interesting paper in Citeseer than in any IEEE or ACM e-journal, it’s the day you don’t look back. But what would happen next? What about the academic symbiosis with the peer review system? Citeseer uses a Google page-rank-like algorithm for ranking: which is analyzing the properties of the citation network topology to understand which papers are more influential than others. Just like Google does with hyperlinks for web pages, Citeseer does it for bibliographic citations: the result is that peer review is not done by a panel of experts, but by every researcher in the field!!
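The citation-ranking idea is easy to sketch: treat citations like hyperlinks and let rank flow from citing papers to cited ones until it settles. A toy version (the graph and the paper names are invented for illustration; this is the generic PageRank recipe, not Citeseer’s actual code):

```python
def pagerank(citations, damping=0.85, iters=50):
    """Rank papers by citation-network topology, PageRank-style.
    `citations` maps each paper to the list of papers it cites."""
    papers = sorted(set(citations) | {q for cs in citations.values() for q in cs})
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p in papers:
            cited = citations.get(p, [])
            if cited:           # p's rank flows to the papers it cites
                for q in cited:
                    new[q] += damping * rank[p] / len(cited)
            else:               # paper with no citations: spread rank evenly
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# toy citation graph: A, B, and D all cite C; C cites A
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"], "D": ["C"]})
best = max(ranks, key=ranks.get)  # the most-cited paper wins
```

Notice that no editorial panel appears anywhere: the “review” is just the aggregate citing behavior of every researcher in the field.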

2005-10-04: Ben Goertzel has some thoughts on how academic papers are stuffed with irrelevant filling, and how this impedes real progress:

what strikes me is how much pomp, circumstance and apparatus academia requires in order to frame even a very small and simple point. References to everything in the literature ever said on any vaguely related topic, detailed comparisons of your work to whatever it is the average journal referee is likely to find important — blah, blah, blah, blah, blah…. A point that I would more naturally get across in 5 pages of clear and simple text winds up being a 30 page paper!

I’m writing some books describing the Novamente AI system — one of them, 600 pages of text, was just submitted to a publisher. The other 2, 300 and 200 pages respectively, should be submitted later this year. Writing these books took a really long time but they are only semi-technical books, and they don’t follow all the rules of academic writing — for instance, the whole 600 page book has a reference list no longer than I’ve seen on many 50-page academic papers, which is because I only referenced the works I actually used in writing the book, rather than every relevant book or paper ever written. I estimate that to turn these books into academic papers would require me to write 60 papers. To sculpt a paper out of text from the book would probably take me 2-7 days of writing work, depending on the particular case. So it would be at least 1 full year of work, probably 2 full years of work, to write publishable academic papers on the material in these books!

The lack of risk-taking is particularly evident in computer science:

Furthermore, if as a computer scientist you develop a new algorithm intended to solve real problems that you have identified as important for some purpose (say, AI), you will probably have trouble publishing this algorithm unless you spend time comparing it to other algorithms in terms of its performance on very easy “toy problems” that other researchers have used in their papers. Never mind if the performance of an algorithm on toy problems bears no resemblance to its performance on real problems. Solving a unique problem that no one has thought of before is much less impressive to academic referees than getting a 2% better solution to some standard “toy problem.” As a result, the whole computer science literature (and the academic AI literature in particular) is full of algorithms that are entirely useless except for their good performance on the simple “toy” test problems that are popular with journal referees.

His first scenario makes me wonder if amateur scientists could again make meaningful contributions to research, combined with a wiki-like process that (hopefully) would identify promising directions better than today’s peer reviews:

And so, those of us who want to advance knowledge rapidly are stuck in a bind. Either generate new knowledge quickly and don’t bother to ram it through the publication mill … or, generate new knowledge at the rate that’s acceptable in academia, and spend 50% of your time wording things politically and looking up references and doing comparative analyzes rather than doing truly productive creative research.

2006-12-31: The trend towards cross-disciplinary research is getting stronger. this is very good news for this dabbler 😉

There is an increasing coalescence of scientific disciplines in many areas. Thus the discovery of the structure of the genome not only required contributions from parts of biology, physics, chemistry, mathematics, and information technology, but in turn it led to further advances in biology, physics, chemistry, technology, medicine, ecology, and even ethics. And all this scientific advance is leading, as it should, to the hopeful betterment of the human condition (as had been also one of the platform promises of the Unity of Science movement, especially in its branch in the Vienna Circle).

Similar developments happen in the physical sciences—a coalescence of particle physics and large-scale astronomy, of physics and biology, and so forth. It is a telling and not merely parochial indicator that ~50% of my 45 colleagues in my Physics Department, owing to their widespread research interests, now have joint appointments with other departments at the University: with Molecular and Cellular Biology, with Mathematics, with Chemistry, with Applied Sciences and Engineering, with History of Science. Just now, a new building is being erected next to our Physics Department. It has the acronym LISE, which stands for the remarkable name, Laboratory of Integrated Science and Engineering. Although in industry, here and there, equivalent labs have existed for years, the most fervent follower of the Unity of Science movement would not have hoped then for such an indicator of the promise of interdisciplinarity. But as the new saying goes, most of the easy problems have been solved, and the hard ones need to be tackled by a consortium of different competences.

2007-04-08: +1, but s/researcher/infoworker/

Delicious is the Rome, Jerusalem, and Paris of my existence as a researcher these days. It’s where I make my friends, how I get the news, and where I go to trade.

2007-04-18: Open Source clinical research

Our goal is to make collaboration and open source come to life in the field of clinical research. With our partners, we will identify specific projects where the sharing of information will lead to better, more accurate research.

2007-10-10: Publication bias: Only positive correlations are published. Time for a big expert system to feed the data from all experiments in.
2007-12-14: Creative Commons

Nature Magazine announced that it’s going to share all its human genome papers under Creative Commons Attribution-NonCommercial-ShareAlike licenses. The genomes themselves are not copyrightable and go into a public database, but the papers — which are a vital part of the science — may now be freely copied by any non-commercial publisher.

2009-03-11: Strangest college courses. Can’t let Sarah find this list!

15. Arguing with Judge Judy: Popular ‘Logic’ on TV Judge Shows
14. Underwater Basket Weaving
13. Learning From YouTube
12. Philosophy and Star Trek
11. The Art of Walking
10. Daytime Serials: Family and Social Roles
9. Joy of Garbage
8. The Science of Superheroes
7. Zombies in Popular Media
6. The Science of Harry Potter
5. Cyberporn and Society
4. Simpsons and Philosophy
3. Far Side Entomology
2. Myth and Science Fiction: Star Wars, The Matrix, and Lord of the Rings
1. The Strategy of StarCraft

2009-03-12: Subsidizing intellectually challenged people to go to college just exacerbates the glut of useless degrees.

2010-07-21: Against Tenure

Scholars in their 60s are not producing path-breaking new research, but they are the people that tenure protects. Scholars in their 20s have no academic freedom at all.

2011-10-28: Makes the argument that #ows is people with useless degrees and a lot of student loan debt. the useless degree bubble is definitely one to pop.

But the lower tier of the New Class — the machine by which universities trained young people to become minor regulators and then delivered them into white collar positions on the basis of credentials in history, political science, literature, ethnic and women’s studies — with or without the benefit of law school — has broken down. The supply is uninterrupted, but the demand has dried up. The agony of the students getting dumped at the far end of the supply chain is in large part the OWS. As Above the Law points out, here is “John,” who got out of undergrad, spent a year unemployed and living at home, and is now apparently at University of Vermont law school, with its top ranked environmental law program — John wants to work at a “nonprofit.”

Even more frightening is the young woman who graduated from UC Berkeley, wanting to work in “sustainable conservation.” She is now raising chickens at home, dyeing wool and knitting knick-knacks to sell at craft fairs. Her husband has been studying criminal justice and EMT — i.e., preparing to work for government in some of California’s hitherto most lucrative positions — but as those work possibilities have dried up, he is hedging with a (sensible) apprenticeship as an electrician. These young people are looking at serious downward mobility, in income as well as status. The prospects of the lower tier New Class semi-professionals are dissolving at an alarming rate. Student loan debt is a large part of its problems, but that’s essentially a cost question accompanying a loss of demand for these professionals’ services.

2013-04-12: You can observe the useless degrees epidemic every day in Williamsburg.

At present, a high % of high school graduates continue to enroll in college, often taking on punishing levels of debt in the process. As nearly any recent college graduate knows, many of these people end up working in lower-wage service jobs (baristas, for example). I think it is entirely possible that future high school graduates will look at these outcomes and begin shying away from college.

2014-02-09: Everyone going to college is completely wrong.

Student population: now in decline; Admissions: harder & harder to make targets; Finance: banks not willing to take on more student loans; Research: shift from tenure to adjuncts.
I don’t think MOOCs are the solution. The premise that “everyone” should have a university education is wrong, and deprives people of quality apprenticeships.

On top of that, there are massive problems with science itself. Academic literature is a relic of the era of typesetting, modeled on static, irrevocable publication. A dependency graph would tell us, at a click, which of the pillars of scientific theory are truly load-bearing. Revision control (git, anyone?) would allow for a much more mature way to incorporate improvements from anyone.
2014-06-12: From upworthy university:

You NEED to see this hot model (NSFW) of ethnic politics and foreign policy
These squirrels were forced to migrate south because of the Ice Age. What happened next WILL SHOCK YOU

2014-07-01: Amusing given how many overeducated people with useless degrees already work at Starbucks. I don’t think more degrees are the answer.

Starbucks will provide a free online college education to 1000s of its workers, without requiring that they remain with the company

2015-02-26: p-hacking

published p-values cluster suspiciously around this 0.05 level, suggesting that some degree of p-hacking is going on. This is also often described as torturing the data until it confesses
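The “torture the data” effect is easy to reproduce: if you keep testing as the data comes in and stop the moment p drops below 0.05, the false-positive rate balloons even when there is no effect at all. A minimal simulation (a z-test on null data with optional stopping; all the numbers here are illustrative choices):

```python
import math
import random

def z_test_p(xs):
    """Two-sided p-value for H0: mean == 0, with known sd == 1 (z-test)."""
    z = sum(xs) / math.sqrt(len(xs))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def one_null_study(rng, batch=10, max_n=100, alpha=0.05, peek=True):
    """Simulate one study where the null is true (no real effect).
    With peek=True, test after every batch and stop at the first p < alpha:
    the optional-stopping flavor of p-hacking."""
    xs = []
    while len(xs) < max_n:
        xs += [rng.gauss(0, 1) for _ in range(batch)]
        if peek and z_test_p(xs) < alpha:
            return True          # declared "significant" despite no effect
    return z_test_p(xs) < alpha  # honest: one test at the planned n

sims = 2000
rng = random.Random(42)
peeking = sum(one_null_study(rng, peek=True) for _ in range(sims)) / sims
rng = random.Random(42)
honest = sum(one_null_study(rng, peek=False) for _ in range(sims)) / sims
print(f"honest false-positive rate:  {honest:.3f}")   # sits near alpha
print(f"peeking false-positive rate: {peeking:.3f}")  # inflated well above alpha
```

With ten chances to peek, the nominal 5% error rate roughly quadruples, which is exactly why p-values clustering just under 0.05 look suspicious in aggregate.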

2015-02-27: Double-blind peer review

Nature has decided to add an option for double-blind peer review – papers would be sent to the referees without author names or institutional affiliations. Papers from Big Names won’t bother, because they have more to lose by being covered up. So the double-blinded papers might end up coming disproportionately from smaller groups who are trying to even the playing field, and opting in risks becoming a negative signal. It might be better if Nature took the plunge and blinded everything.

2015-05-08: Or you could have saved yourself 250k and gotten a useful education right from the start.

In a Boston basement that houses a new kind of vocational training school, Katy Feng is working harder than she ever did at Dartmouth College. The 22-year-old graduated last year with a bachelor’s degree in psychology and studio art that cost more than $250K. She sent out 10s of résumés looking for a full-time job in graphic design but wound up working a contract gig for a Boston clothing store. “I thought, they’ll see Dartmouth, and they’ll hire me. That’s not really how it works, I found.” She figures programming is the best way to get the job she wants. Hence the basement, where she’s paying $11,500 for a 3-month crash course in coding.

2015-06-06: Stop bailing out useless degrees:

Actually, no, let’s not do that, and just let people hold basic jobs even if they don’t cough up $100K from somewhere to get a degree in Medieval History? Sanders’s plan would subsidize the continuation of a useless tradition that has turned into a speculation bubble, prevent the bubble from ever popping, and disincentivize people from figuring out a way to route around the problem

2015-06-08: Scientific publishing is stuck in the 18th century.

to a great extent the internet is used as a PDF delivery device by many publishers, and the PDF is an electronic form of the classic paper journal article, whose basic outlines were established in the 17th and 18th centuries. In other words, in a qualitative sense we’re not that much beyond the Age of Newton and the heyday of the Royal Society. Scientific publishing today is analogous to “steampunk.” An anachronistic mix of elements somehow persisting deep into the 21st century. Its real purpose is to turn the norms of the past into cold hard cash for large corporations.

2015-07-02: Wikipedia + Open Access = Revolution

Our research suggests that open access policies have a tremendous impact on the diffusion of science to the broader general public through an intermediary like Wikipedia

2016-05-16: This academic title generator is a clever take on the increasing deadweight at universities, all the “administrators”.

Principal Deputy Manager of the Subcommittee for Athletic Communications
Assistant Vice Dean of the Committee on Neighborhood Compliance
Lead Associate Provost for the Office of Dining Compliance

2016-07-17: This Open Science Framework is excellent, starts peer review during the experiment design phase when it’s still cheap to make corrections, rather than when it’s all done.
2017-02-01: Scientific fraud is rampant

Hartgerink is 1 of only a handful of researchers in the world who work full-time on the problem of scientific fraud – and he is perfectly happy to upset his peers. “The scientific system as we know it is pretty screwed up. I’ve known for years that I want to help improve it. Statcheck is a good example of what is now possible”. The top priority is something much more grave than correcting simple statistical miscalculations. He is now proposing to deploy a similar program that will uncover fake or manipulated results – which he believes are far more prevalent than most scientists would like to admit.

2017-04-15: I wonder how many people ended up graduating with their head trauma degrees?

It was a bold new idea: an all-sports college, classes be damned. But for the athletes at Forest Trail Sports University who faced hunger, sickness and worse, it turned into a nightmare.

2018-08-22: Replication revolution

Scientific revolutions occur on all scales, but here let’s talk about some of the biggies:

  • 1850-1950: Darwinian revolution in biology, changed how we think about human life and its place in the world.
  • 1890-1930: Relativity and quantum revolutions in physics, changed how we think about the universe.
  • 2000-2020: Replication revolution in experimental science, changed our understanding of how we learn about the world.

We are in the middle of a scientific revolution involving statistics and replication in many areas of science, moving from an old paradigm in which important discoveries are a regular, expected product of statistically-significant p-values obtained from routine data collection and analysis, to a new paradigm of . . . weeelll, I’m not quite sure what the new paradigm is.

2019-07-31: Progress studies

Progress itself is understudied. By “progress,” we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries. For a number of reasons, there is no broad-based intellectual movement focused on understanding the dynamics of progress, or targeting the deeper goal of speeding it up. We believe that it deserves a dedicated field of study. We suggest inaugurating the discipline of “Progress Studies.”

2020-02-04: COVID-19 papers are released very quickly, and data is openly shared. this is how science is supposed to work.

But the scientific journal as we know it was actually born because of popular demand for information during a pandemic.

In the early 1820s, a smallpox outbreak struck Paris and other French cities. A new vaccine was in existence at the time, but reports varied about how effective it was. A powerful medical institution in Paris, the Académie de Médecine, gathered its members to discuss what advice it should issue to the nation. Historically, such meetings were held privately, but the French Revolution had ushered in a new era of government accountability, and journalists were allowed to attend. The scientific debate they relayed upset some members of the Académie, which had hoped to make a clear, unified statement. In response, the Académie sought to regain control of its message by publishing its own weekly accounts of its discussions, which evolved into the academic journals we know today.

2020-05-01: Another take:

The massive effort to develop vaccines for COVID-19 will boost and accelerate the development of cancer vaccines. Streamlined regulatory approvals will speed up the approval of all new medical treatments. Elon Musk was successfully executing a transformation to fully reusable rockets, mass-produced satellites, electric cars, electric trucks, and self-driving vehicles. Elon Musk will continue to execute and win.

2020-06-25: Another example:

In just 3 months, 1 British research team identified the first life-saving drug of the pandemic (and helped cancel hydroxychloroquine). The Recovery trial has an adaptive design, built to evaluate 6 different drugs at once, with methods and goals announced in advance.

2021-04-09: Science funding

Science funding mechanisms are too slow in normal times and may be much too slow during the COVID-19 pandemic. Fast Grants are an effort to correct this. If you are a scientist at an academic institution currently working on a COVID-19 related project and in need of funding, we invite you to apply for a Fast Grant. Fast Grants are $10k to $500k and decisions are made in under 14 days. If we approve the grant, you’ll receive payment as quickly as your university can receive it.

2021-06-15: They were a big success, here’s what they learned:

The first round of grants was given out within 48 hours. Later rounds, which often required additional scrutiny of earlier results, were given out within two weeks. These timelines were much shorter than those of the alternative funding sources available to most scientists. Grant recipients were required to do little more than publish open-access preprints and provide monthly one-paragraph updates. We allowed research teams to repurpose funds in any plausible manner, as long as they were used for research related to COVID-19. Besides the 20 reviewers, each of whom put in perhaps 20-40 hours, the total Fast Grants staff consisted of four part-time individuals, each of whom spent a few hours per week on the project after the initial setup.

Among the lessons learned:

  • We found it interesting that relatively few organizations contributed to Fast Grants. The project seemed a bit weird, and individuals seemed much more willing to take the “risk”.
  • We were very positively surprised at the quality of the applications. We didn’t expect people at top universities to struggle so much with funding during the pandemic.
  • 32% of respondents said that Fast Grants accelerated their work by “a few months”, and 64% told us that the work in question wouldn’t have happened without receiving a Fast Grant.
  • We were disappointed that very few trials actually happened. This was typically because of delays from university institutional review boards (IRBs) and similar internal administrative bodies, which were consistently slow to approve trials even during the pandemic.
  • In our survey of the scientists who received Fast Grants, 78% said that they would change their research program “a lot” if their existing funding could be spent in an unconstrained fashion. We find this number to be far too high: the current grant funding apparatus does not allow some of the best scientists in the world to pursue the research agendas that they themselves think are best. Scientists are in the paradoxical position of being deemed the very best people to fund in order to make important discoveries, but not so trustworthy that they should be able to decide what work would actually make the most sense!

2021-08-07: Designing a better University

Another effect of ASU’s pragmatic research culture is reducing overspecialization among academic disciplines. Crow and Dabars recognize that “specialization has been the key to scientific success” and that disciplines have historic value, but they worry that “such specialization simultaneously takes us away from any knowledge of the whole,” leaving us ill-prepared for the future. Disciplines make it harder to synthesize information, and if a university wants to remain a flexible knowledge enterprise, it needs to be prepared to take a holistic approach. During Crow’s tenure, ASU consolidated quite a few academic departments, such as history and political science, or English and modern languages, in order to encourage these fields to solve real problems together – which also had the added benefit of reducing administrative costs. Instead of having to apply to ASU first, a student can start by doing the coursework, then enroll when they’re ready, with part of their degree already completed. This feels well-aligned to me with tech’s focus on prioritizing output over credentials. It’s also worth noting that because this experiment lives in Learning Enterprise, it doesn’t detract from the more traditional degree work that lives under Academic Enterprise. ASU also seems to share a lot of values that I cherish about tech, such as optimism, entrepreneurialism, and responsiveness, as well as being results-oriented, and it appears to be a culture that’s enforced from the top. While ASU might not run as fast as a startup, their culture of testing, prototyping, and reinvention seems rare for such a large institution.

2021-11-18: A fascinating look at all the research around Ivermectin, and what lessons to draw from it about the state of science

This is one of the most carefully-pored-over scientific issues of our time. Tens of teams published studies saying ivermectin definitely worked. Then most scientists concluded it didn’t. What a great opportunity to exercise our study-analyzing muscles! To learn stuff about how science works which we can then apply to less well-traveled terrain! If the lesson of the original replication crisis was “read the methodology” and “read the preregistration document”, this year’s lesson is “read the raw data”. Which is a bit more of an ask. Especially since most studies don’t make it available.

2021-12-08: Literature search doesn’t work

I worked on biomedical literature search, discovery and recommender web applications for many months and concluded that extracting, structuring or synthesizing “insights” from academic publications (papers) or building knowledge bases from a domain corpus of literature has negligible value in industry. Close to nothing of what makes science actually work is published as text on the web. Research questions that can be answered logically through just reading papers and connecting the dots don’t require a biotech corp to be formed around them. There’s much less logic and deduction happening than you’d expect in a scientific discipline.

2022-01-30: New types of science organizations

Now founders and investors—including tech CEOs, crypto billionaires, bloggers, economists, celebrities, and scientists—are coming together to address stasis with experimentation. They’re building a fleet of new scientific labs to speed progress in understanding complex disease, extending healthy lifespans, and uncovering nature’s secrets in long-ignored organisms. In the process, they’re making research funding one of the hottest spaces in Silicon Valley.

Arc Institute

Problem: U.S. science funding attaches too many strings to our best researchers, preventing them from working on the most interesting problems.

Solution: Arc gives scientists no-strings-attached, multiyear funding so that they don’t have to apply for external grants.

Arcadia Science

Problem: Modern science is too siloed—both because researchers are too narrowly focused and because peer-reviewed journals stymie collaboration.

Solution: Expand the menu of species that we deeply research—and embrace an open-science policy.

New Science

Problem: Science is getting old, fast.

Solution: New Science sponsors young scientists.

Bringing all method advances together:

Altogether, these examples are largely restricted to specific disciplines: while research in genomics has pioneered the use of massive open databases, it rarely contains robustness checks or the pre-registration of methods. While the methods of clinical trials are required to be pre-registered, their analysis code is rarely shared publicly.

We believe that good practices from individual fields should serve as models for how science ought to work across the board, and that the scientific process should be radically reassembled from start to finish. How would it work?

To begin with, scientists would spend far more time clarifying the theories they are studying – developing appropriate measures to record data, and testing the assumptions of their research – as the meta-scientist Anne Scheel and others have suggested. Scientists would use programs such as DeclareDesign to simulate data, and test and refine their methods.

Instead of writing research in the form of static documents, scientists would record their work in the form of interactive online documents (such as in Markdown format). Past versions of their writing and analysis would be fully available to view through platforms such as Git and OSF. Robustness checks and multiverse analysis would be the norm, showing readers the impact of various methodological decisions interactively.
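The multiverse analysis mentioned above can be sketched in a few lines: enumerate every combination of defensible analytic choices and report the estimate under each, rather than silently picking one. The data, the outlier cutoffs, and the transforms below are invented purely for illustration.

```python
import itertools
import math
import statistics

def multiverse(data, outlier_cutoffs, transforms):
    """Run the same analysis under every combination of defensible
    analytic choices, so readers can see how much the headline
    estimate depends on them."""
    results = {}
    for cutoff, (name, fn) in itertools.product(outlier_cutoffs, transforms.items()):
        kept = [x for x in data if x <= cutoff]                         # outlier-handling choice
        results[(cutoff, name)] = statistics.mean(fn(x) for x in kept)  # transform choice
    return results

data = [1.0, 2.0, 2.5, 3.0, 40.0]            # one extreme value
specs = multiverse(
    data,
    outlier_cutoffs=[10.0, 100.0],           # drop vs. keep the outlier
    transforms={"raw": lambda x: x, "log": math.log},
)
for spec, estimate in sorted(specs.items(), key=str):
    print(spec, round(estimate, 3))
```

Showing all four estimates side by side makes it obvious when a result hinges on one arbitrary decision, which is exactly the transparency the interactive documents described here would provide.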

Once research is freed from the need to exist in static form, it can be treated as if it were a live software product. Analysis code would be standardized in format and regularly tested by code-checkers, and data would be stored in machine-readable formats, which would enable others to quickly replicate research or apply methods to other contexts. This would also make it easy to apply new methods to old data.

Some types of results would be stored in mass public databases with entries that would be updated as needed, and other researchers would reuse those results in further analysis. Citations would be viewer-friendly, appearing as pop-ups that highlight passages or refer to specific figures or datasets in prior research (each with its own DOI), and these citations would be automatically checked for corrections and retractions.

Peer review would operate openly, where the wider scientific community and professional reviewers would comment on working papers, Red Teams would be contracted to challenge research, and comments and criticisms of studies would be displayed alongside them on platforms such as PubPeer. Journals would largely be limited to aggregating papers and disseminating them in different formats (for researchers, laypeople and policymakers); they would act, perhaps, in parallel with organizers of conferences. They could perform essential functions such as code-checking and copy-editing, working through platforms such as GitHub.

2022-02-07: Why Isn’t There a Replication Crisis in Math?

There’s a lot to say about the mathematics we use in social science research, especially statistics, and how bad math feeds the replication crisis. But I want to approach it from a different angle. Why doesn’t the field of mathematics have a replication crisis? And what does that tell us about other fields that do? One of the distinctive things about math is that our papers aren’t just records of experiments we did elsewhere. In experimental sciences, the experiment is the “real work” and the paper is just a description of it. But in math, the paper itself is the “real work”. Our papers don’t describe everything we do, of course. But the paper contains a (hopefully) complete version of the argument that we’ve constructed. And that means that you can replicate a math paper by reading it. Mathematicians have a pretty good idea of what results should be true; but so do psychologists! Mathematicians sometimes make mistakes, but since they’re mostly trying to prove true things, it all works out okay. Social scientists are also (generally) trying to prove true things, but it doesn’t work out nearly so well. Why not? In math, a result that’s too good looks just as troubling as one that isn’t good enough. If a study suggests humans aren’t capable of making reasoned decisions at 11:30, it’s confounded by something, even if we don’t know what.

2022-03-06: There’s much less of a replication crisis in Biology:

In biology, when one research team publishes something useful, then other labs want to use it too. Important work in biology gets replicated all the time—not because people want to prove it’s right, not because people want to shoot it down, not as part of a “replication study,” but just because they want to use the method. So if there’s something that everybody’s talking about, and it doesn’t replicate, word will get out.

2022-07-18: New funding ideas

To avoid rent dissipation and risk aversion, our state funding of science should be simplified and decentralized into Researcher Guided Funding. Researcher Guided Funding would take the ~$120b spent by the federal government on science each year and distribute it equally to the 250k full-time research and teaching faculty in STEM fields at high-research-activity universities, who already get 90% of this money. This amounts to about $500k for each researcher every year. You could increase the amount allocated to some researchers, while still avoiding dissipating resources on applications, by allocating larger grants in a lottery that only some of them win each year. 60% of this money could be spent pursuing any project they want, with no requirements for peer consensus or approval. With no strings attached, Katalin Karikó and Charles Townes could use these funds to pursue their world-changing ideas despite doubt and disapproval from their colleagues. The other 40% would have to be spent funding projects of their peers. This allows important projects to gain a lot of extra funding if a group of researchers is excited about them. With over 5,000 authors on the paper chronicling the discovery of the Higgs boson at the Large Hadron Collider, this group of physicists could muster $2.5b a year in funding without consulting any outside sources. This system would avoid the negative effects of long and expensive review processes, because the state hands out the money with very few strings, and of risk aversion among funders, because the researchers individually get to decide what to fund and pursue.
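A back-of-envelope check of the proposal’s numbers, using only the figures quoted above (note the pooled figure assumes the collaboration combines its full allocations, not just the 40% peer-directed share):

```python
total_budget = 120e9      # ~$120b in annual federal science spending
researchers = 250_000     # STEM faculty at high-research-activity universities

per_researcher = total_budget / researchers   # $480k/year, roughly the $500k quoted
discretionary = 0.60 * per_researcher         # spent however the researcher wants
peer_directed = 0.40 * per_researcher         # must go to projects of their peers

# A 5,000-author collaboration pooling its full allocations controls:
pooled = 5_000 * per_researcher               # ~$2.4b/year, close to the $2.5b quoted

print(f"per researcher: ${per_researcher:,.0f}")
print(f"discretionary:  ${discretionary:,.0f}")
print(f"pooled (5,000): ${pooled:,.0f}")
```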

2022-10-20: Maybe not all hope is lost. It seems there’s improvement in preregistering studies, more data sharing etc.

There are encouraging signs that pre-registered study designs like this are helping address the methodological problems described above. Consider the following five graphs, which show the results from five major studies, each of which attempted to replicate many experiments from the social sciences literature. Filled-in circles indicate the replication found a statistically significant result in the same direction as the original study. Open circles indicate this criterion wasn’t met. Circles above the line indicate the replication effect was larger than the original effect size, while circles below the line indicate the effect size was smaller. A high degree of replicability would mean many experiments with filled circles, clustered fairly close to the line. Here’s what these five replication studies actually found:

As you can see, the first four replication studies show many replications with questionable results – large changes in effect size, or a failure to meet statistical significance. This suggests a need for further investigation, and possibly that the initial result was faulty. The fifth study is different, with statistical significance replicating in all cases, and much smaller changes in effect sizes. This is a 2020 study by John Protzko et al. that aims to be a “best practices” study. By this, they mean the original studies were done using pre-registered study designs, as well as large samples and open sharing of code, data and other methodological materials, making experiments and analysis easier to replicate… In short, the replications in the fifth graph are based on studies using much higher evidentiary standards than had previously been the norm in psychology. Of course, the results don’t show that the effects are real. But they’re extremely encouraging, and suggest the spread of ideas like Registered Reports contributes to substantial progress.
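The classification the graphs encode can be written down directly. `classify_replication` is a hypothetical helper, and the p-values and effect sizes below are invented for illustration:

```python
def classify_replication(p_value, orig_effect, rep_effect, alpha=0.05):
    """Apply the two criteria from the graphs described above: a 'filled
    circle' means the replication was statistically significant in the same
    direction as the original; 'above the line' means the replication effect
    was larger in magnitude than the original."""
    same_direction = (orig_effect > 0) == (rep_effect > 0)
    filled = p_value < alpha and same_direction
    above_line = abs(rep_effect) > abs(orig_effect)
    return filled, above_line

# Significant replication with a somewhat smaller effect: filled, below the line
print(classify_replication(p_value=0.01, orig_effect=0.50, rep_effect=0.42))  # (True, False)
# Failed replication: not significant, and the effect shrank to near zero
print(classify_replication(p_value=0.30, orig_effect=0.50, rep_effect=0.05))  # (False, False)
```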

There’s also some interesting ideas about funding:

Fund-by-variance: Instead of funding grants that get the highest average score from reviewers, a funder should use the variance (or kurtosis, or some similar measurement of disagreement) in reviewer scores as a primary signal: only fund things that are highly polarizing (some people love it, some people hate it). One thesis to support such a program is that you may prefer to fund projects with a modest chance of outlier success over projects with a high chance of modest success. An alternate thesis is that you should aspire to fund things only you would fund, and so should look for signal to that end: projects everyone agrees are good will certainly get funded elsewhere. And if you merely fund what everyone else is funding, then you have little marginal impact.
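A minimal sketch of the fund-by-variance idea, assuming each proposal comes with a list of numeric reviewer scores (the proposal names and scores are invented):

```python
import statistics

def rank_by_disagreement(proposals):
    """Order proposals by reviewer disagreement (sample variance of their
    scores), most polarizing first, instead of by mean score."""
    return sorted(proposals,
                  key=lambda p: statistics.variance(proposals[p]),
                  reverse=True)

scores = {
    "consensus-good": [7, 7, 8, 7],      # everyone agrees: funded elsewhere anyway
    "polarizing": [10, 2, 9, 1],         # some love it, some hate it: fund this
    "consensus-mediocre": [5, 4, 5, 4],
}
print(rank_by_disagreement(scores))  # 'polarizing' ranks first
```

A real funder might combine this with a minimum mean-score floor so that uniformly bad proposals don’t sneak in on disagreement alone.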

2023-02-23: Registered reports for publishing negative results

The fundamental principle underpinning a Registered Report is that a journal commits to publishing a paper if the research question and the methodology chosen to address it pass peer review, with the result itself taking a back seat. For now, Nature is offering Registered Reports in the field of cognitive neuroscience and in the behavioral and social sciences. In the future, we plan to extend this to other fields, as well as to other types of study, such as more exploratory research.

Why are we introducing this format? In part to try to address publication bias, the tendency of the research system — editors, reviewers and authors — to favor the publication of positive over negative results. Registered Reports help to incentivize research regardless of the result. An elegant and robust study should be appreciated as much for its methodology as for its results.
More than 300 journals already offer this format, up from around 200 in 2019. But despite having been around for a while, Registered Reports are still not widely known — or widely understood — among researchers. This must change. And, at Nature, we want to play a part in changing it.

2023-08-31: History and other disciplines are even worse than Psychology

Science’s replication crises might pale in comparison to what happens all the time in history, which is not just a replication crisis but a reproducibility crisis. Replication is when you can repeat an experiment with new data or new materials and get the same result. Reproducibility is when you use exactly the same evidence as another person and still get the same result — so it has a much, much lower bar for success, which is what makes the lack of it in history all the more worrying.

2024-04-12: Mandating open access as a condition of funding

The Bill & Melinda Gates Foundation, one of the world’s top biomedical research funders, will from next year require grant holders to make their research publicly available as preprints. The foundation also said it would stop paying for article-processing charges (APCs) — fees imposed by some journal publishers to make scientific articles freely available online for all readers, a system known as open access (OA).