Tag: atom

Radioactivity & deep time

Radium forever transformed attitudes to time and where we may be within history – creating the first efflorescence of truly long-term thinking. Until that point, we knew the Earth was old, but hadn’t fully embraced how many more millions – or even billions – of years could lie ahead for humanity and the planet. In Europe, Christians assumed they were much closer to time’s end than its beginning. Judgement Day was anticipated soon. Then, the 1900s dawned, and radioactivity was discovered. This changed everything. From thinking they lived near history’s end, people now recognized they could be living during its very beginning. Humanity’s universe, no longer decrepit, now seemed positively youthful.

Nuclear space propulsion

Project Orion: Atom bombs as propellants. Those 50s guys had balls.

2007-05-04: Project Pluto

SLAM’s simple but revolutionary design called for the use of nuclear ramjet power, which would give the missile virtually unlimited range. Air forced into a duct as the missile flew would be heated by the reactor, causing it to expand, and exhaust out the back, providing thrust. Pluto’s namesake was Roman mythology’s ruler of the underworld — seemingly an apt inspiration for a locomotive-size missile that would travel at near-treetop level at 3x the speed of sound, tossing out hydrogen bombs as it roared overhead. Pluto’s designers calculated that its shock wave alone might kill people on the ground. Then there was the problem of fallout. In addition to gamma and neutron radiation from the unshielded reactor, Pluto’s nuclear ramjet would spew fission fragments out in its exhaust as it flew by


2014-11-23: The Philae lander died after 60h because ESA couldn't fit it with a nuclear battery; too much paranoia in Europe.
2017-12-04: A 10 kW nuclear reactor for space exploration from NASA. Bravo, especially considering the silliness of ESA restrictions on nuclear propulsion in space.

2019-12-04: Pulsed Fission Fusion

Pulsed Fission-Fusion should be able to achieve 15 kW/kg and 30K seconds of specific impulse (Isp). This would be an orders-of-magnitude improvement over competing systems such as nuclear electric, solar electric, and nuclear thermal propulsion, which suffer from lower available power and inefficient thermodynamic cycles.
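To put that Isp figure in perspective, here is a minimal sketch using the Tsiolkovsky rocket equation; the 50% propellant fraction and the ~450 s chemical benchmark are illustrative assumptions, not numbers from the announcement.

    # Tsiolkovsky rocket equation: delta_v = g0 * Isp * ln(m0 / m1).
    # Comparing the quoted 30,000 s Isp against ~450 s for a good chemical
    # engine, at the same (assumed) propellant fraction.
    import math

    G0 = 9.81  # standard gravity, m/s^2

    def delta_v_kms(isp_s: float, mass_ratio: float) -> float:
        return G0 * isp_s * math.log(mass_ratio) / 1000

    mass_ratio = 2.0   # assumption: half the wet mass is propellant
    print(f"chemical (Isp ~450 s):  {delta_v_kms(450, mass_ratio):6.1f} km/s")
    print(f"PuFF (Isp ~30,000 s):   {delta_v_kms(30_000, mass_ratio):6.1f} km/s")

At the same propellant fraction, delta-v scales linearly with Isp, which is the whole appeal.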

2022-01-30: How serious is NASA about nuclear?

Today’s push for nuclear power in space is a useful metric for measuring the seriousness of NASA’s—and the nation’s—lunar and Martian ambitions. In the context of human spaceflight, NASA has a well-known aversion to “new” (and thus presumably more risky) technology—but in this case, the “old” way makes an already perilous human endeavor needlessly difficult. For all the challenges of embracing nuclear power for pushing the horizon outward for humans in space, it is hard to make the case that tried-and-true chemical propulsion is easier or carries significantly less physical—and political—risk. Launching 10 International Space Stations’ worth of mass across 27 superheavy rocket launches for fuel alone for a single Mars mission would be a difficult pace for NASA to sustain. (That is more than 40 launches and at least $80b if the agency relies on the SLS.) And such a scenario assumes everything goes perfectly: sending help to a troubled crew on or around Mars would require 10s of additional fuel launches, and chemical propulsion allows very limited windows of opportunity for the liftoff of any rescue mission.

If, with a single technology, that alarmingly high number of ludicrously expensive launches could be cut down to 3—while also offering more chances to travel to Mars and back—how could a space agency that was earnest in its ambitions not pursue that approach? No miracles are necessary, and regulators and appropriators seem to agree that the time has come.

We can fly to Mars. Splitting atoms, it seems, is now the safest way to make that happen.

Nuclear energy

Watching The Mummy Returns reminded me of an article I had read some time ago, arguably one of the scariest I ever read. It discusses the problem of marking a site as dangerous for 10 ka into the future.

These standing stones mark an area used to bury radioactive wastes. The area is … by … kilometers and the buried waste is … kilometers down. This place was chosen to put this dangerous material far away from people. The rock and water in this area may not look, feel, or smell unusual but may be poisoned by radioactive wastes. When radioactive matter decays, it gives off invisible energy that can destroy or damage people, animals, and plants.
Do not drill here. Do not dig here. Do not do anything that will change the rocks or water in the area.
Do not destroy this marker. This marking system has been designed to last 10 ka. If the marker is difficult to read, add new markers in longer-lasting materials in languages that you speak. For more information go to the building further inside. The site was known as the WIPP (Waste Isolation Pilot Plant) site when it was closed in …

2006-10-16: Well-researched Thorium piece, but Michael needs to become more concise: he repeats himself too much.

Sometime between 2020 and 2030, we will invent a practically unlimited energy source that will solve the global energy crisis. This unlimited source of energy will come from thorium. A summary of the benefits, from a recent announcement of the start of construction for a new prototype reactor:

  • There is no danger of a melt-down like the Chernobyl reactor.
  • It produces minimal radioactive waste.
  • It can burn plutonium waste from traditional nuclear reactors.
  • It is not suitable for the production of weapon grade materials.
  • Global thorium reserves could cover our energy needs for 1000s of years.

2007-10-01: Using beta decay for batteries. Now being rehashed as the new hotness.
2008-01-09: Micro Nuclear Reactor

The new reactor, which is only 7m x 2m, could change everything for a group of neighbors who are fed up with the power companies and want more control over their energy needs.

2008-05-22: Why bother with oil-based stuff when you can have distributed nuclear energy with Uranium hydride batteries?
2008-07-24: Uranium Deep Burn

It is projected that volumes of high-level waste could be reduced by a factor of 50, while extra electricity is generated.

2008-12-01: Thorium

Besides the low amount of waste and almost complete burning of all Uranium and Plutonium, another big advantage of liquid fluoride reactors is fast and safe shutoff and restart capability. This fast stop and restart allows for load-following electricity generation. This means nuclear power can address a different electric utility niche than just baseload. Currently natural gas is the primary load-following power source. Wind and solar are intermittent in that they generate power at unreliable times. LFTR would be reliable, on-demand power.

Fuck ethanol. Let's have some 21st century nuclear power.

Thorium is one of the victims of the brainless scare campaign against nuclear that has infected most western nations over the last 30 years. Instead of doing silly stunts like the Germans, whose “exit” from nuclear energy will mean more coal plants being built, an enlightened nation would choose thorium.

Instead, we are stuck with aging reactors (how does that make anyone safer?) and scientific illiteracy both in the general population and elected representatives.

I’m generally dismayed how little discussion about thorium there is in energy circles.

Kirk Sorensen provides an update on the current state of thorium power. The bad news is that it remains mostly a theoretical concept; no operational reactor has been deployed yet — even as a prototype. However, new thorium molten salt experiments were just started in Europe. We have good “line of sight” on the science to build one — so, at this point, the limiting factor is mostly funding. In a world of privately-funded space travel, such a gating obstacle shouldn’t remain for long. 4 specific difficulties are mentioned:

  • Salts can be corrosive to materials.
  • Designing for high-temperature operation is more difficult.
  • There has been little innovation in the field for several decades.
  • The differences between LFTRs and the light water reactors in majority use today are vast; the former “is not yet fully understood by regulatory agencies and officials.”

Andrew Yang has proposed a nuclear subsidy—$50B over 5 years

2008-12-09: Steven Chu Energy Secretary

He is pro-nuclear and has a deep understanding of all the technical issues around energy. A real change from the Bush administration in selecting extreme competence. It is not in any way a guarantee of correct energy choices, because there is still political reality.

2014-02-04: The Linear No-Threshold (LNT) Radiation Dose Hypothesis, which surreally influences every regulation and public fear about nuclear power, is based on no knowledge whatever.

At stake are the 100s of billions spent on meaningless levels of “safety” around nuclear power plants and waste storage, the projected costs of next-generation nuclear plant designs to reduce greenhouse gases worldwide, and the extremely harmful episodes of public panic that accompany rare radiation-release events like Fukushima and Chernobyl. (No birth defects whatever were caused by Chernobyl, but fear of them led to 100K panic abortions in the Soviet Union and Europe. What people remember about Fukushima is that nuclear opponents predicted that 100s or 1000s would die or become ill from the radiation. In fact nobody died, nobody became ill, and nobody is expected to.)
2014-02-14: You can power the world for 72 years with the nuclear waste that exists today, at a price cheaper than coal. Of course it will likely not happen due to collusion between the coal industry and the fear industrial complex.

2015-03-18: China nuclear

China approved 2 reactors this month as it vowed to cut coal use to meet terms of a CO2-emissions agreement reached in November between President Xi Jinping and US counterpart Barack Obama. About $370b will be spent on atomic power. Plans call for tripling nuclear capacity by 2020, to as much as 58 gigawatts.

2015-06-15: Amazing energy densities

Assuming a 25% conversion efficiency, a Radioisotope Power Source (RPS) would have 400K MJ/kg (electric) compared to 0.72 MJ/kg for Li-ion batteries. The goal is to make a 5-watt “D cell”, but nuclear-powered, that lasts decades.
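As a sanity check on what those densities mean for the “D cell” goal, here is a rough sketch; the 30-year service life is an assumed reading of “lasts decades”, and the masses cover the energy source only, not shielding or the converter.

    # Rough arithmetic behind the quoted energy densities.
    SECONDS_PER_YEAR = 3.156e7

    rps_mj_per_kg = 400_000    # electric, per the quote (25% conversion assumed)
    liion_mj_per_kg = 0.72     # electric, per the quote

    power_w = 5
    years = 30                 # assumption for "lasts decades"
    energy_mj = power_w * years * SECONDS_PER_YEAR / 1e6   # ~4,700 MJ delivered

    print(f"energy over {years} y at {power_w} W: {energy_mj:,.0f} MJ")
    print(f"RPS source mass:  {energy_mj / rps_mj_per_kg * 1000:,.0f} g")
    print(f"Li-ion mass:      {energy_mj / liion_mj_per_kg:,.0f} kg")

Roughly a dozen grams of isotope versus several tonnes of lithium-ion cells for the same delivered energy; the hard part is the shielding and conversion hardware wrapped around those grams.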

2016-05-16: TerraPower

Bill Gates is funding Nathan Myhrvold’s Terrapower, a fast breeder reactor that burns a U238 duraflame log for 60 years, with 99% efficiency vs 1% for today’s U235 reactors. No fuel to reload or waste to ship around. Existing nuclear waste could be used as fuel.

2016-11-14: Molten Salt Fission

“It is the first time a comprehensive IAEA international meeting on molten salt reactors has ever taken place. Given the interest of Member States, the IAEA could provide a platform for international cooperation and information exchange on the development of these advanced nuclear systems.” Molten salt reactors operate at higher temperatures, making them more efficient in generating electricity. In addition, their low operating pressure can reduce the risk of coolant loss, which could otherwise result in an accident. Molten salt reactors can run on various types of nuclear fuel and use different fuel cycles. This conserves fuel resources and reduces the volume, radiotoxicity and lifetime of high-level radioactive waste.

2016-11-28: Making nuclear energy radically less expensive

“The big thing is that the government is making national lab resources available to private companies in a way that it wasn’t before. If you are a nuclear startup, you can only go so far before you need to do testing, and you are not going to build a nuclear test facility, because that is hard and expensive. But now you could partner with a national lab to use their experimental resources. I’ve been talking about how to set up a pathway from universities for this kind of research.”

2016-12-01: Coal to nuclear can rapidly address 30% of CO2

The high temperature reactors can replace the coal burners at 100s of supercritical coal plants in China. The lead of the pebble bed project indicates that China plans to replace coal burners with high temperature nuclear pebble bed reactors.

2017-02-22: 1m tons of nuclear fuel

The amount of used nuclear fuel will continue to increase, reaching around 1M tons by 2050. The uranium and plutonium that could be extracted from that used fuel would be sufficient to provide fuel for at least 140 light water reactors of 1 GW capacity for 60 years. “It makes sense to consider how to turn today’s burden into a valuable resource.”

2017-08-16: How it is going with China nuclear

The overall cost of this first of a kind nuclear plant will be in the neighborhood of $5K/kW of capacity. That number is based on signed and mostly executed contracts, not early estimates. It is 2x the initially expected cost. 35% of the increased cost could be attributed to higher material and component costs than initially budgeted, 31% of the increase was due to increases in labor costs, and the remainder due to the increased costs associated with the project delays.

Zhang Zuoyi described the techniques that will be applied to lower the costs; he expects them to soon approach the $2k/kW capacity range. If this can be achieved, then the 210 MW reactor would be $525m. A 630 MW reactor would be $1.5b. It could be less if the 600 MW reactor only had to have the thermal unit and could use the turbine and other parts of an existing coal plant.
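For what it's worth, the quoted prices are just capacity times specific cost; a quick sketch of that arithmetic (note that the $525m and ~$1.5b figures correspond to about $2.5k/kW rather than the $2k/kW target):

    # Plant cost = capacity (kW) x specific cost ($/kW); figures from the quote.
    def plant_cost_musd(capacity_mw: float, usd_per_kw: float) -> float:
        return capacity_mw * 1_000 * usd_per_kw / 1e6

    for mw in (210, 630):
        print(f"{mw} MW at $2,500/kW -> ${plant_cost_musd(mw, 2_500):,.0f}m")
    # 210 MW -> $525m, 630 MW -> $1,575m (~$1.5b); roughly double that
    # at the current ~$5,000/kW first-of-a-kind cost.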

2018-11-09: Towards approval

Terrestrial Energy is leading the way to getting regulatory approvals for its molten salt fission reactor design. Terrestrial Energy aims to build the first walkaway-safe molten salt modular reactor design in the late 2020s. IMSR generates 190 MW of electric power with a thermal-spectrum, graphite-moderated, molten-fluoride-salt reactor system. It uses standard-assay low-enriched uranium (less than 5% 235U) fuel.

2019-06-24: Nuclear Waste Storage

Deep in the bedrock of Olkiluoto Island in southwest Finland a tomb is under construction. The tomb is intended to outlast not only the people who designed it, but also the species that designed it. It is intended to maintain its integrity without future maintenance for 100 ka, able to endure a future ice age. 100 ka ago 3 major river systems flowed across the Sahara. 100 ka ago anatomically modern humans were beginning their journey out of Africa. The oldest pyramid is around 4.6 ka old; the oldest surviving church building is fewer than 2 ka old.

This Finnish tomb has some of the most secure containment protocols ever devised: more secure than the crypts of the Pharaohs, more secure than any supermax prison. It is hoped that what is placed within this tomb will never leave it by means of any agency other than the geological.

The tomb is an experiment in post-human architecture, and its name is Onkalo, which in Finnish means “cave” or “hiding place.” What is to be hidden in Onkalo is high-level nuclear waste, perhaps the darkest matter humans have ever made.

2020-05-20: 3D-Printed Nuclear Reactor

The reams of data generated by 3D-printing parts can speed up the certification process and lower the cost of getting a nuclear reactor online.

2021-04-20: Nuclear power failed. We need to deeply understand the reasons, because there won't be an energy transition without new nuclear.

To avoid global warming, the world needs to massively reduce CO2 emissions. But to end poverty, the world needs massive amounts of energy. In developing economies, every kWh of energy consumed is worth $5 of GDP.

How much energy do we need? Just to give everyone in the world the per-capita energy consumption of Europe (which is only half that of the US), we would need to more than triple world energy production, increasing our current 2.3 TW by over 5 additional TW. If we account for population growth, and for the decarbonization of the entire economy (building heating, industrial processes, electric vehicles, synthetic fuels, etc.), we need more like 25 TW.

The proximal cause of nuclear's flop is that it is expensive. In most places, it can't compete with fossil fuels. Natural gas can provide electricity at 7–8 c/kWh; coal at 5 c/kWh. Why is nuclear expensive? I'm a little fuzzy on the economic model, but the answer seems to be that it's in design and construction costs for the plants themselves. If you can build a nuclear plant for around $2.50/W, you can sell electricity cheaply, at 3.5–4 c/kWh. But costs in the US are around 2–3x that. (Or they were—costs are so high now that we don't even build plants anymore.)
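To make the $/W-to-c/kWh jump less fuzzy, here is a hedged back-of-the-envelope levelized-cost sketch; the capacity factor, discount rate, plant life, and fuel/O&M adder are assumptions chosen only to show how the arithmetic lands in the article's ballpark, not figures from the article.

    # Back-of-the-envelope levelized cost of electricity from capital cost per watt.
    # Assumptions (not from the article): 90% capacity factor, 7% discount rate,
    # 40-year life, ~1.5 c/kWh for fuel plus operations & maintenance.
    def lcoe_cents_per_kwh(capex_usd_per_w, capacity_factor=0.90,
                           discount_rate=0.07, life_years=40,
                           fuel_om_cents=1.5):
        # Capital recovery factor: annual payment per dollar of capital financed.
        crf = discount_rate / (1 - (1 + discount_rate) ** -life_years)
        annual_capital_usd_per_w = capex_usd_per_w * crf
        kwh_per_w_per_year = capacity_factor * 8766 / 1000
        capital_cents = annual_capital_usd_per_w / kwh_per_w_per_year * 100
        return capital_cents + fuel_om_cents

    print(f"$2.50/W -> {lcoe_cents_per_kwh(2.50):.1f} c/kWh")   # ~3.9, the 'cheap' case
    print(f"$6.00/W -> {lcoe_cents_per_kwh(6.00):.1f} c/kWh")   # ~7.2, roughly US costs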

2022-09-14: Simple reactor designs that can be iterated quickly may be the future

Much of the future lies with KRUSTY-like kilowatt-scale systems. Nuclear has a power density problem that keeps it from powering our cars and planes. The shielding and heat engines are too heavy. The radiation and particles are harmful because they contain a lot of energy. The answer is to make solid-state technologies that convert heat and radiation into electricity. It is theoretically possible to turn gamma rays into electricity with something similar to a solar cell. Shielding gets lighter and generates electricity! It also brings new life to many isotopes that require too much shielding to be practical in radioisotope generators. In the meantime, kilowatt-scale systems can compete in smaller remote power applications and supplement solar microgrids. Further cost decreases could enable electricity customers to defect from the grid where solar is not feasible. Competing manufacturers promise a much more competitive industry than exists today, where incentives rarely encourage falling prices.

The endgame is a chunk of nuclear material that can regulate itself based on user demand, surrounded by energy-capturing devices that soak up every bit of emitted energy. Power density could exceed today’s liquid fuels and batteries while having extreme energy density. We’d finally get our flying cars! Reactors that look like KRUSTY are on the path to that endgame.

2023-03-25: Nuclear has some near-fatal problems that make it a non-starter on earth. Beyond the well-known overregulation, the biggest problem is that nuclear produces relatively low temperature heat that then has to be converted to electricity, which is very inefficient. A process would have to be found to turn radiation and heat directly into electricity, without the steam turbines.
2023-07-13: How we got the current regulatory regime

In a world where industry and activists fought to a standstill, Probabilistic Risk Assessment provided the only credible guiding light. Rasmussen and team first began to compile and model relevant data in the early 1970s. Over the decades the industry’s database grew, and the NRC developed an opinion on every valve, every pipe, the position of every flashing light in a plant. This angered the utilities, who could not move a button on a control panel without reams of test data and its associated paperwork. This angered activists when the refinement of models predicted safety margins could be relaxed.

But Probabilistic Risk Assessment has no emotions. Probabilistic Risk Assessment estimated, validated, learned. Probabilistic Risk Assessment would form the barrier protecting us from catastrophe.

Atom

I support the Log Format Roadmap because it has a fighting chance to become the first practical step to achieve CMS content interop. Blogs will drive adoption of the principles stated in against the grain. As a weblog vendor, I support it because it will drive the adoption of better tools, and will increase the market for everyone.
2003-06-27: Sam Ruby has been spearheading a major standardization effort in the blog world recently, and he has this to say about his motivations:

About a month ago, my interest and activity in this space kicked into high gear. I started attending weblogging conferences.

Without claiming to have been the inspiration, it is still very nice to think that OSCOM was able to contribute to the drive towards standardization. This is the stuff we are talking about.
2004-06-08: So that is what Greg Stein has been up to. The sprint was much fun, as were the drinks.

Ever since Atom first popped up, I’ve been interested in it, and even attempted to join a small sprint/discussion at Seybold last year to talk about WebDAV. The bomb threat shut that down, but we simply moved locations for drinks rather than hacking 🙂 So while I’ve been tracking it generally, my specific current interest is through my work at Google. I’m the engineering manager for the Blogger group, so I’ve gotta pay some attention to what we’re signing up for 🙂

2005-09-06: All feeds for this blog now serve Atom 1.0. It will be interesting to watch if anyone notices / cares. Longer term, /atom.xml is the canonical url if you want to subscribe.
2006-10-18: RSS / Atom / OPML schematron is much easier to work with than the mysterious feed validator code. Plus it works for really huge feeds. This has the README for the RSS validator. Pretty out of date, but a good starting point. For one, you need the latest schematron from Rick Jelliffe, not the old one on this site.
2006-11-21: Gdata JSON. They also do jsonp, and reuse the Atom serialization.
2006-12-01: GData for Google Spreadsheets. The data web circle gets more complete. This is (one) counterpart to the web formulas in Google Sheets. Now as to how GData can play in the semweb space. Maybe via Queso.
2007-01-31: Tim wonders how to use Atom categories properly. Link to the wikipedia url of the tag I’d say.
2007-02-14: If you browse to a page with an RSS or Atom feed, you get the option to immediately add that feed to Google Reader for mobile via Mobile Proxy Feed Discovery.
2007-03-31: Some Atom extensions by Nature to encourage text mining. I don't know... They do not seem to reuse core Atom in their examples. Plus I am not sure how useful a word count really is.
2007-05-16: GdataServer

Generally speaking, the Lucene GData Server is an extensible syndication format server providing CRUD actions to alter feed content, authentication, optimistic concurrency and full text search based on Apache Lucene.

2007-05-25: APP frontend to LDAP. This might enable some interesting scenarios.
2007-06-01: Opensearch / Atom interface for Swiss whitepages. Nice!

We have built an interface for our telephone directory that allows our phone data to be integrated into other applications or websites. The interface is based on the REST concept. Results are delivered as an Atom feed, augmented with OpenSearch and tel.search.ch-specific fields. With the help of a key, the results are also returned in structured form. The number of results is limited to 200 entries per query.

2007-06-08: GData Fails as a Protocol

Gregor Rothfuss wondered whether I couldn’t influence people at Microsoft to also standardize on GData. The fact is that I’ve actually tried to do this with different teams on multiple occasions and each time I’ve tried, certain limitations..

2007-06-10: Oy. And all this because I asked Dare why Microsoft doesn’t use APP.

There was quite a flurry of blogging about the Atom Publishing Protocol (APP) over the weekend, all kicked off by Dare Obasanjo’s criticisms of the protocol. Some of the posts were critical of Dare and his motives, but I’m thankful he started the conversation.

2007-07-26: The chorus for putting more REST into GIS / mapping gets louder, yay

The only thing needed to bring together this messy new world Atlas, is a global agreement about the structure of the data used to annotate the maps, as well as agreement on the format for retrieving such.

2007-07-28: WFS simple was hijacked, as usual, by people who don’t understand why worse is better. This is why I am not in the least interested in WFS and am betting on APP instead.

if the geospatial standards community continues on this path of isolating itself, of looking upstream to the ISO rather than downstream to the distributed neogeo developer community, it will miss out on being connected to amazing things.

Here’s a Feature Demo of a RESTful WFS-T with a call for GE to support posting of features. I would go further and ask for APP support.
Version control for Collaborative Mapping. Calls for diffs and patches. Might be built on top of an APP infrastructure, imho

The next major area of tool improvement I see is expanding the wiki notion of editing to more of a merging revision control model, with branches, versions, patches and eventually expanding in to distributed repositories. The ‘patch‘ is a small piece of code that can be applied to a computer program to fix something. They are widely used in the open source software world, both to get the latest improvements, and to allow those who have commit rights to a source repository to review outside improvements before putting them in. This helps create the meritocracy around projects, as they don’t let just anyone in to the repository as they might break the build. Such a case is less likely with maps, but sometimes core contributors might want to see a couple sample patches before letting a new member in. In the GeoServer versioning WFS work we have a GetDiff operation that returns a WFS Transaction that can then be applied to another WFS. This fits in with the technical part of how a patch works – they’re really easy to apply to one’s dataset. But unfortunately a WFS transaction is not as easy to read as a code patch. The other great thing about patches is that when leaf nodes are updating their data they can just request the change set – the patches – instead of having to do a full check out. So I’m still not sure how to solve this problem, the WFS Transaction is the best I’ve got, but I think we can do better, have a nice little format that just describes what changed.

Better UIs for Collaborative Mapping. More calls for rollback tools, and would like to see GE post to geoserver, etc

I think we need more user friendly options for collaborative editing. Not just putting some points on a map, but being able to get a sense of the history of the map, getting logs of changes and diffs of certain actions. Editing should be a breeze, and there should be a number of tools that enable this. Google’s MyMaps starts to get at the ease of editing, but I want it collaborative, able to track the history of edits and give you a visual diff of what’s changed. Rollbacks should also be a breeze – if you have really easy tools to edit it’s also going to be easier for people to vandalize. So you need to make tools that are even easier to rollback.

2007-07-29: Atom Futures

AtomPub sits in a very strange place, as it has the potential to disrupt 6 or more industry sectors: Enterprise Content Management, Blogging, Digital/Desktop Publishing and Archiving, Mobile Web, EAI/WS-* messaging, Social Networks, Online Productivity tools. As interesting as the adoption rates will be the people and sectors finding reasons not to use it, protecting distribution channels and data lock-ins with more complicated solutions. Any kind of data garden is fair game for AtomPub to rationalize.

2007-07-30: Towards signed feeds

Why Digital Signature? This idea was first proposed by James Snell, and it’s a good one. Mind you, the benefits are a little bit theoretical, since no feed-reading clients that I’ve seen actually check a digital signature. The argument for this is similar to that for TLS; a bad guy who could somehow insert a fake press release into the feed could make zillions by gaming the share price. A verifiable digital signature would let someone reading the feed know that the news in it really truly did come from Sun.

2007-07-31: Atom for KML. Nice. I want to do more, but this is a good start. The Atom / KML meme spreads. Perception is reality, and I approve.
2007-08-03: Appfs

appfs can mount remote resources exposed via the Atom Publishing Protocol as a local filesystem.

2007-08-07: RESTful partial updates. Maybe useful for APP / KML to supplement update

over the past couple of months, there’s been a lot of discussion about the problem of partial updates in REST-over-HTTP. The problem is harder than it appears at first glance. The canonical scenario is that you’ve just retrieved a complicated resource, like an address book entry, and you decide you want to update just one small part, like a phone number. The canonical way to do this is to update your representation of the resource and then PUT the whole thing back, including all of the parts you didn’t change. If you want to avoid the lost update problem, you send back the ETag you got from the GET with your PUT inside an If-Match: header, so that you know that you’re not overwriting somebody else’s change.
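A minimal sketch of that GET / modify / conditional PUT cycle in Python using the requests library; the URL and the payload field are hypothetical, and a real AtomPub server would exchange XML entries rather than JSON, but the ETag / If-Match mechanics are the same.

    # Optimistic concurrency for a full-resource update: GET the entry, keep its
    # ETag, then PUT the modified representation back with If-Match so the server
    # rejects the write if someone else changed the resource in the meantime.
    import requests

    ENTRY_URL = "https://example.org/contacts/42"   # hypothetical resource

    resp = requests.get(ENTRY_URL)
    resp.raise_for_status()
    etag = resp.headers["ETag"]
    entry = resp.json()

    entry["phone"] = "+1-555-0100"                  # change just one field locally

    update = requests.put(
        ENTRY_URL,
        json=entry,                                 # send the whole representation back
        headers={"If-Match": etag},
    )
    if update.status_code == 412:                   # Precondition Failed: lost update averted
        print("Entry changed underneath us; re-fetch and retry.")
    else:
        update.raise_for_status()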

Zend Google Data Client

The Zend Google Data Client provides a PHP 5 component to execute queries and commands against the Google Data APIs.

2007-08-14: Winer on Atom. Sore loser.
2007-08-19: How to deal with the sliding window problem where feed producers update more often than consumers, and consumers thus might miss entries.
A standardized way to get at previous entries that have scrolled out of a feed, and at the complete archive.
2007-08-28: YouTube GData. Nice to see more media-heavy usages. Now we have pretty much all of them, only KML is missing.
2007-10-29: APP Lock-In. So cute. Microsoft is in a tight spot: Admit they have no strategy and use APP, or invent their own. It seems they are trying to build a case to do just that.

It seems that while we weren’t looking, Google moved us a step away from a world of simple, protocol-based interoperability on the Web to one based on running the right platform with the right libraries. Usually I wouldn’t care about whatever bad decisions the folks at Google are making with their API platform. However the problem is that it sends out the wrong message to other Web companies that are building Web APIs. The message that it’s all about embracing and extending Internet standards with interoperability being based on everyone running sanctioned client libraries instead of via simple, RESTful protocols is harmful to the Internet. Unfortunately, this harkens back to the bad old days of Microsoft and I’d hate for us to begin a race to the bottom in this arena.

2007-12-06: FeedSync. The full syncing requirement makes this heavyweight.

Although FeedSync is capable of full-blown multi-master synchronization, there are all kinds of interesting uses, including simple one-way uses. Consider, for example, how RSS typically has no memory. Most blogs publish items into a rolling window. If you subscribe after items have scrolled out of view, you can’t syndicate them. A FeedSync implementation could enable you to synchronize a whole feed when you first subscribe, then update items moving forward. It could also enable the feed provider to delete items, which you might not want if the items are blog postings, but would want if they’re calendar items representing cancelled events.

Nuclear Weapons

Broken pipes and rusty fences. If that ain’t scary, few things are.

The main entrances to Los Alamos are only marginally better defended than TA-33’s land. The military-like guards keeping watch at these points certainly look fierce in camouflage paints and black bulletproof vests. But there’s little to back up the image. Their belts have gun holsters, but no guns to fill them. Around facilities like the biology lab, where anthrax and other biotoxins have been handled, no sentries stand guard at all. Nor is there any kind of fence to keep the curious and the malicious away — not even a piece of string.

2006-10-09: Might it all be posturing?

The United States Geological Survey is now reporting the magnitude of the claimed North Korean nuclear test as 4.2. This seems to be curiously low. Now, estimating explosive yield from the body magnitude of a seismic event is a tricky business, and requires knowledge of details such as the depth of the detonation and the geological properties of the surroundings, but a magnitude around 4.2 is what you’d expect for a detonation of 1 kiloton. The “natural size” of a crude fission bomb is in excess of 10 kilotons, from which you’d expect a magnitude closer to 5. It is very unlikely that a low kiloton yield device would be used in an initial test.
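For a feel for the numbers, here is a small sketch of a magnitude-yield relation of the usual empirical form mb = a + b·log10(yield in kt); the coefficients below are illustrative assumptions chosen to match the figures in the post, since real fits depend heavily on burial depth, coupling, and geology.

    # Illustrative magnitude-yield scaling: mb = A + B * log10(yield_kt).
    # A and B are assumed values consistent with the quoted figures
    # (mb ~ 4.2 for ~1 kt, closer to 5 for >10 kt); not a calibrated fit.
    import math

    A, B = 4.2, 0.95

    def magnitude_from_yield(yield_kt: float) -> float:
        return A + B * math.log10(yield_kt)

    def yield_from_magnitude(mb: float) -> float:
        return 10 ** ((mb - A) / B)

    for kt in (1, 10, 20):
        print(f"{kt:>3} kt -> mb ~ {magnitude_from_yield(kt):.1f}")
    print(f"mb 4.2 -> ~{yield_from_magnitude(4.2):.1f} kt")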

2006-12-03: The Agony of Atomic Genius, biographical sketch of J. Robert Oppenheimer

Now I am become death, destroyer of worlds

2008-06-28: Man-made nuclear explosions in the 1940s and 1950s released isotopes into the environment that do not occur naturally, allowing the dating of works of art.
2010-09-21: The Atom Bomb on Film. Or you could go to the atomic testing museum in Vegas and see these and much more in person.

2010-11-25: Nuke Detector. Turn a supertanker into an antineutrino detector by kitting it out with the necessary photon detectors and filling it with 10^34 protons. Then station it off the coast of suspicious countries and submerge it.
2013-11-26: India nuclear assassinations and the Indian government is mum about it. Nuclear scientists have very high mortality in Iran too, but the government there is making a huge ruckus about it.

Indian nuclear scientists haven’t had an easy time of it over the past 10 years. Not only has the scientific community been plagued by “suicides,” unexplained deaths, and sabotage, but those incidents have gone mostly underreported in the country—diluting public interest and leaving the cases quickly cast off by police.

2014-02-05: Nuclear backpacks

during the Cold War, the United States did deploy man-portable nuclear destruction. If Warsaw Pact forces ever bolted toward Western Europe, they could resort to nukes to delay the advance long enough for reinforcements to arrive. These “small” weapons, many of them more powerful than the Hiroshima bomb, would have obliterated any battlefield and irradiated much of the surrounding area.

2014-11-15: X-Ray Man

In 1957, a young man named Darrell Robertson enlisted in the US Army and participated in a secret training program in the middle of the Nevada desert. He and his fellow recruits were sworn to secrecy and, for decades, told no one of their experiences. In 1996, the US government declassified the project and Robertson was finally able to tell his story. In X-Ray Man, Robertson recalls training exercises in which the Department of Defense used him and other soldiers in nuclear tests more than 10 years after the horrors of Hiroshima and Nagasaki were already well known. Kerri Yost’s powerful short documentary is an account of how Cold War-era fears allowed for shocking treatment not just of supposed enemies, but also of those enlisted to fight against them. Though cancer has attacked his body, Robertson, supported by his wife, remains stoic and dignified, offering the quiet but forceful observation that ‘any person in the military becomes part of military science’.

2015-09-09: Nuclear wars for SETI. Nuclear explosions might be the first thing we see of other life at interstellar distances. Gamma rays are much easier to detect than radio waves, but would only last a few days at most. You’d have to be extremely lucky to catch that, but then we can spot GRBs like that all the time.
2016-07-17: The H-Bombs in Turkey

Among the many questions still unanswered following Friday’s coup attempt in Turkey is one that has national-security implications for the United States and for the rest of the world: How secure are the American hydrogen bombs stored at a Turkish airbase?

2019-03-12: Trinity Test. The first detonation of a nuclear bomb

2021-02-20: $100b nuclear deterrence

To avoid being destroyed and rendered useless—their silos provide no real protection against a direct Russian nuclear strike—they would be “launched on warning,” that is, as soon as the Pentagon got wind of an incoming nuclear attack. Because an error could have disastrous consequences, James Mattis testified to the Senate Armed Services Committee in 2015 that getting rid of America’s land-based nuclear missiles “would reduce the false alarm danger.” Whereas a bomber can be turned around even on approach to its target, a nuclear missile launched by mistake can’t be recalled.

OSCOM

2002-08-21: I had a very interesting conversation with Nisheet from Netscape today. He heads the xml dom system and several other initiatives, and is now looking for ways to make the browser do more interesting stuff. We talked about how innovations happened pretty much on the server side (cms, j2ee, xml technologies) and that the browser is still stuck with basic forms for most of the gui.

Nisheet is eager to learn more about the content management open source community, and to figure out how to work with oscom to make mozilla a better platform for accessing cms. I mentioned xopus to Nisheet as an example for gui innovation, and we mused about ways to provide stuff like xopus for a wide variety of systems.

There is a lot of good technology out there in the browser that needs to be leveraged. Nisheet thinks that the interests of mozilla and oscom are well aligned and I have invited him to our mailing list so that we can start the dialog.

We agreed that discussions should be result-driven, and that we should start to look for issues that we can solve together rather than talk about interop all day 🙂 Going forward, we should ask ourselves what Mozilla can do for us, and vice versa. That may be a good approach to getting results.
Nisheet from netscape sent mail inquiring about OSCOM. Very cool. I should meet with him sometime to see what’s up. Since Mozilla supports webdav, it should be quite easy to integrate Mozilla into our interop soup.
2002-09-19:

OSCOM happens in a week. I am especially interested in progress in content syndication and exchange, the session I will be chairing with Michi from Wyona
2002-09-19: Do open source projects want to inter-communicate and share? This article asks this question in the context of OSCOM Interop, a new project to foster interoperability and sharing between open source content management systems.
by Paul Everitt and Gregor J. Rothfuss

Everybody loves the idea of the bazaar. Small, autonomous shops doing commerce in the wild, right in the shadow of the centrally-planned economy of the cathedral. But even a bazaar needs rules, right? Coordination and cooperation don’t always spring up out of thin air.

In the world of open source, developers wonder if KDE and Gnome will ever interoperate in a meaningful way. But first we have to address if the question is even legitimate. Should they?

This article discusses a budding effort towards interoperability between open source content management systems, while evaluating the question, “Why interoperate?”

Background
The market of content management has always been associated with the big boys. Large software, large consulting teams, and very large prices.

Due to a number of factors, this mindset is in decline. Smaller approaches to content management, including open source projects, are popping up constantly. The open source projects are attracting attention from mainstream analysts and journalists.

From this grew OSCOM, an international non-profit for open source content management. The basic idea is to foster communications and community amongst the creators of these open source projects. A very successful conference was held in Zurich earlier this year. Another is slated for Berkeley in September.

After Zurich, some of the presenters discussed ways to make future meetings less a parade of individual projects, and more a forum for sharing ideas and working together. This led to a discussion of interoperability amongst open source content management projects, particularly in relation to a Java Community Proposal for content repositories, created by and for the big boys.

To test drive our ability to tackle interop issues, the OSCOM folks are working on a single problem: a common way to give presentations using a “SlideML” format and a set of rendering templates.

Reality Check
We are eager to continue these discussions face-to-face in Berkeley. But we should also step back and ask, “Is interop a bunch of crap?”

It’s a serious question. Why should a project leader or project team do the extra work? Many of the best open source projects aren’t really architectures. They are programs that started by scratching an individual itch. Later in their life, if they live long enough, they realize the bigger picture and do a massive rewrite, thus getting an architecture. But rarely is this new architecture designed with the idea of talking to other, similar systems.

So interop can impose serious scope creep on the architecture of a project. Strike one.

Next, how powerful is the motivation for working with “the competition”? At the least, a project leader has little cultural involvement with other projects, and thus doesn’t have that good old maternal feeling that sparks late hours doing something for free. At the worst, one project can view another with condescension, envy, or any other mixture of emotions that come from the tribalism of balkanized projects.

Strike two.

Finally, aren’t there already enough standards? Writing standards is a difficult process, one that doesn’t come naturally to open source developers with the ethic of “speak with code”. Shouldn’t we embrace the man-years of existing standards and focus on good implementations? (Note: the answer is “yes”.)

Beneficiaries and Their Expectations
We now have a stark, bleak picture. Thus, what is the driving need for interop, and who are its beneficiaries?

The first beneficiary is the developer, who bears the “cognitive burden” that our projects impose. Imagine you are a consultant, and you have become an expert at Midgard. But you have a project where you need to work with AxKit. Atop the difference in programming languages, everything about the world of content management is different. Concepts, jargon, etc. If interop can give the tenuous grip of 5% commonality in approach, this can at least provide the mental connections to the next 25% of functionality.

The second beneficiary is customers who might have more than one project in use, or want to reserve the right to throw out their current project next year if they aren’t happy. Can they even get 25% of the current site’s content and configuration migrated? If not, then they are locked in. It is often argued that open source does not lock you in. But is this really true in a meaningful way? While it is certainly possible to migrate data between open source projects, or content management systems for that matter, it is by no means an easy and painless process.

The third beneficiary is the implementor of authoring tools. Imagine you are a developer at Mozilla, OpenOffice, Xopus, Bitflux, or KDE. You’d like to tie the client-side environment, where real authoring happens, into the server side environment, where collaboration happens.

There are over 10 projects presenting at OSCOM. If Mozilla has to talk to 10 different systems in 10 different ways, they will probably opt to talk to none of them. However, if the various projects agree to a thin set of common capabilities, then there is a basis for authoring-side integration.

But we’re all open source veterans here, so let’s cut the crap. Do any of these people have a right to ask for interop? This is open source, scratch your own itch, I-do-this-because-I-like-it territory. The time spent serving these beneficiaries could be better spent implementing Gadget B, which my mailing list tells me will cure cancer. Right?

Wrong, but first, let’s explore the hidden costs of the process of interop.

Hidden Costs
Doing interop is hard. It’s a lot harder than starting your own software project. Just review the mailing list archives for an interoperability project such as WebDAV. On and on the messages go, for months and years. It takes time to distill the common wisdom from diverse perspectives into a standard that can have multiple implementations.

Harder, though, are the human issues. As we have learned with the SlideML project, you have to bootstrap a culture and a process. Most of the participants are used to being the big fish in their pond. So who is the big fish in a shared pond? How do decisions become final?

From a process perspective, standards require a different kind of rigor than software. In fact, the purpose is to render something that exists separate from the software.

Similar to the projects themselves, though, successful efforts seem to show character traits that combine intellectual confrontation with patient encouragement, with a strong dose of humor and enjoyment.

The Revenge of the Upside
We have discussed the reality check of interop, explored the beneficiaries and questioned their rights, and surveyed the hidden costs. So that’s the downside. What’s the upside of interop that makes it worthwhile?

The authors of this article are promoting the idea of pursuing interop between open source content management systems. We are advocates. So we’ll focus the article on the provocative questions of interop in general, and thus we will limit the upside to one discussion point.

In the world of open source web servers, there is one project that has a majority of the gravity. For databases, there are a couple of projects that split the gravity. Same for desktop environments. But for content management, there are a trillion. This kind of market splintering helps ensure that the big boys are safe to dominate the mainstream, where size and stability matter more than innovation and revolution.

Interop efforts, such as the Linux Standards Base, reduce risks for the mainstream customer. Not completely, perhaps not much at all initially. But it proves that we are interested in the motivations of the mainstream.

But interop is not solely a “feature” to appeal to the mass market; it can also unleash many new possibilities. Consider XML-RPC, which brought interop to scripting languages, and is now baked into 10s of scripting environments on various platforms.

Possible Progress
The existence of OSCOM, the conferences, the budding community, SlideML, and the interop workshops in Berkeley next week are all signs that this interop effort is taking baby steps. At this early stage, we can all be prognosticators and foretell the future with 100% certainty. Take your pick:

  • Prediction One: Interop between open source projects is a fool’s errand.
  • Prediction Two: If we stay practical and focus on small steps, we can provide value with lower risk.
  • Prediction Three: We’ll stumble across the Big Idea that is the bait to get the fish (project leaders) on the hook in a big way.
  • Prediction Four: Somebody will get sued over a patent infringement and we’ll all move to Norway.

Open Questions
There are no easy answers for interop, nor are the questions that need to be answered unique to the content management space.

How and when does interop become “sexy” and arouse interest among developers? What can be learned from interop efforts that succeeded?

Is lowest common denominator functionality still worth anything? The choices are 100% interoperability (fantasy), 0% interoperability (surrender), or 20% interoperability (pragmatism).

Is 20% better than nothing?
2002-09-20: Seems like Mozilla wants to play in the cms space. By attaching itself to Mozilla, OSCOM would be taken on a very exciting ride. The usual cautionary notes about “dancing with the gorilla” apply. Very very interesting.
2002-09-24: amars:

OSCOM’s mere existence has got to be one of the most ridiculous things i’ve seen in recent times. with repressive governments killing their people, exploitive businessmen/politicians trying to reduce our freedoms, children starving, etc… there are better things to devote one’s resources to than a stupid organization dedicated to a stupid redundant cause.

There seems to be a point where one’s actions are visible enough to start attracting that sort of criticism. It is flattery in a weird way. Austin Marshall raises some valid points. Will we be able to live up to the expectations that people start piling on us? Is interop worth it?

The more I learn about various cms, the more I get to value the content that is stored in them over the specific implementation that differs in each cms yet is still very much alike. Tastes and requirements change. I will start working on a java cms full time next year, besides my current interest in various php-based cms. I’m very much interested in taking my content with me when I switch between systems. Indeed there are 100s if not 1000s of cms systems out there. It’s probably one of the most fragmented markets within IT. I consider my time working on interop well-spent, certainly much more so than creating yet another cms.
2002-09-26: OSCOM Day 1. Keynote
Charles Nesson mentioned that education needs open source. He encouraged the OSCOM attendees to work with him to bridge the digital divide in Jamaica. He showed the audience a letter from a government official that strongly supports open source. The minister invites OSCOM to help Jamaica build a digital infrastructure. Harvard / Prof. Nesson is looking for volunteers who want to spend some time in Jamaica to help establish some of the infrastructure.

Midgard
Of particular interest is that Midgard has now been ported to ADODB, which makes a potential port of the PostNuke API on top of Midgard that much easier.

Paul on interop
Paul Everitt was giving his zope cookbook recipe again, but he then embarked upon how interop would change the game for open source CMS by pooling resources and making previously unrealistic projects feasible by sharing them within a group of cms. I’m glad he took the time to go into interop, he pretty much set the stage for the conference by doing so.

Bitflux / Xopus
A neck-and-neck competition between the 2 wysiwyg editors, both of which are so advanced that Netscape was impressed enough to promise to work on any feature requests that OSCOM would bring up. From a technical perspective it looks like Xopus has the lead, but the best thing that could happen would be a fusion of functionality between the 2 editors.

Search Engines
Avi Rappaport talked about open source search engines and how CMS vendors should implement better support for last-modified dates, metadata, and RSS 1.0. It was promising to hear that Avi reached the same conclusion (RSS as a starting point for announcing recently changed pages).

Dinner
I talked at length with Lon from Q42 who complained that the open source release of Xopus had failed to bring new business their way. I pointed out that Xopus was still largely unknown in the CMS landscape, and that it may be feasible to get funding to build the next generation of Xopus.

I’m now very tired and still have to finish my slides / convert them to slideml.
2002-09-29: The conference has been a great success. I will need several days to digest all the new initiatives, and announce them to the proper channels. I’m VERY excited about the prospects. It looks like we will have another conference in March at Harvard, followed by OSCOM IV in Tokyo in September 2003.
2003-03-18: The sprint last week was a resounding success. We got a lot done, and had an awesome amount of fun. We now have over 40 proposals for OSCOM III, and daily subscriptions to the participants list. If we don’t totally screw it up this will be one heck of a conference.
2003-04-18: Roger and I are starting to assume our roles as track chairs at the upcoming OSCOM 3 conference.
Roger put together a couple questions, and I just stumbled across a rather scathing comment on metadata. I wonder what our semantic web experts have to respond to that?
I just sent out these questions by email; as soon as I get answers I will start to trackback these talks. I’m rather excited about the potential to have the talks annotated and commented on before they are even given, all aggregated in one place.
2003-04-21: Marco Federighi had this to say about his talk.

Why is metadata, the semantic web important in general and for you personally?

For me personally, it has no importance whatsoever. For the web in general, metadata does the same job as chapter headings, page headings, footnotes and a good analytical index do for books. Ever tried to USE a book with a lousy index? Nearly useless.

How long do you think it will take until a majority of the web uses metadata and engages in the semantic web?

Majority of whom? Users will engage in the semantic web as much as their favorite search engines engage in it. Web authors will if they are provided with easy ways of adding metadata in a structured manner. Since this is OSCOM and not an academic discussion about the semantic web, the question is: can CMS help at all? That’s what I am trying to answer in my presentation. And, in case you ask, I haven’t got an answer yet ;-).

Do you see any hindrances in the adoption process? If yes, of what sort are they?

The fact that people will overdo it with metadata: add irrelevant, verbose, untruthful stuff by the bucket load.

What are the next important steps to take concerning metadata, the semantic web?

Engage “natural” audiences in creating metadata standards for their areas of interest. Scientific research is an obvious candidate, with metadata standards largely there already for paper publishing. The web was started at CERN to exchange scientific information; my wild guess is that the semantic web will take off with online publishing of scientific papers. OK, bit of a professional bias there ;-).

Can you point me to an interesting project/person in this regard?

No. I come from the other end of the spectrum, CMS USE (not development). I am still very much a newbie there. Although, periodic chats with Susan Hockey are helping :-).

Maybe even a project/person who you miss at this conference.

Not really.

Why should people attend your presentation?

To discuss if CMS and the semantic web have anything to do with each other. My first reaction when the conference was announced was “what the heck has the semantic web to do with OSCOM?”

What should one know before going to your presentation?

Not sure yet. Having read one of the many online primers on the semantic web might help.

Do you see any overlapping with other presentations in Track One?

Not much, but as we get more comments some overlap might develop.

2003-04-26: I will be on a panel tentatively titled “you can’t make money with open source”
I will probably make the case for a relentless standards-driven approach, one that reduces lock-in for the customer, ushers in replaceability. Granted you may be replaced yourself, but you have to press on.
Sun touts open standards over open source. Initial thoughts: most open source programs don’t give a damn about open standards. They create their own puny data formats, and you are locked in for all practical purposes. Something that escapes popular notice.
2003-05-06: Paul and I wrote an article based on our experience with OSCOM about the art of getting different open source projects and communities to work together towards shared goals. I’m really curious to learn whether our thesis that open source doesn’t equate to open minds can be supported by others.
2003-05-16:

Right now, a lot of cooperation seems to be focused instead on “interoperability.” In a recent article, 2 open-source veterans discuss how they have overcome their skepticism about early OSCOM efforts to foster more interoperability among open-source CMS packages. Indeed, some progress has been made on common standards. This is useful, but doesn’t solve the key problem of too many undead projects. Moreover, I believe that customers remain less concerned about interoperability between prospective CMS implementations, and more about integration between CMS applications and other enterprise systems like document management, print publishing, or asset management systems — many of which inevitably are commercial in origin.

To which I say: our interop efforts are exactly targeted towards offering a common interface to other applications. As I explained earlier, there are 2 different aspects of interop: between cms and other applications, and between 2 cms.
I largely agree with Tony’s analysis, and I believe we are on the right track.

The community could also be more active in adopting larger foundational efforts as the base of their systems — employing specifically the diverse Apache initiatives, from web server to repository. 5 years from now, it’s highly likely that buyers are going to choose from among CMS solutions built on top of 4 major “enterprise” families: Oracle, IBM, Microsoft, and Apache. Might as well start aligning with the Apache project now.

Daniel Veillard notes the similarities between the xml and open source communities.
2003-05-23:

The continued dramatic growth in content management systems (CMSs) and technologies (there are 100s of CMSs, including 10s of open source tools) has defied the usual rules of business software markets. The number of new product launches by old and new companies somehow still manages to keep ahead of the ongoing consolidation. This is very healthy. However, even a full-time market analyst paid to be a content management expert is not going to be able to keep up with all the products and features, especially since managing content involves technologies that go well beyond a CMS. Fortunately, there is an industry effort gathering steam to provide an open and free list of CMS products and features. This public domain classification will be based on an XML schema (CMSML) so that anyone can use the information.

The above is from an article in The Gilbane Report on the OSCOM CMSML effort; I’m a coauthor.
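Since the CMSML schema itself isn’t reproduced here, the following is only a rough, invented sketch of the kind of product/feature record such a classification could contain.

```python
# Rough, invented sketch of the kind of product/feature record a
# CMSML-style classification could contain. Element and attribute names
# are illustrative only, not the actual CMSML schema.
import xml.etree.ElementTree as ET

cms = ET.Element("cms", name="ExampleCMS", license="GPL")
features = ET.SubElement(cms, "features")
for name in ("versioning", "workflow", "webdav"):
    ET.SubElement(features, "feature").text = name

print(ET.tostring(cms, encoding="unicode"))
```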
2003-05-29: We are finally online. Jacked into irc, blogging can begin.. #oscom on freenode.net
Tony Byrne starts off by remarking that the commercial cms world is increasing its lead over oss. Why is that?

  1. Intelligent Word copy-paste (amazing this one is #1 🙂)
  2. intuitive versioning (very nice, marks diffs inline; a rough sketch follows just after this list)
  3. visual workflow cues (basically graphically laying out the state of a workflow)
  4. browser-based editing of images
  5. pre-localized interface (basically the GUI is multilingual from the start)
  6. in-context editing (editing navigation on a page with propagation)
  7. dependency reports
  8. useful reporting (never logged in 🙂)
  9. forms-based workflow (makes it easy to create ridiculously complex workflow, even has some wfml support for visio)
  10. 508-compliant output
  11. distributed user management
  12. integration with other tools (connectors)
  13. migration tools (tidy on steroids)
  14. browser-based content object development
  15. open architectures: container and repository independence
  16. real-time LDAP integration
  17. Separation of Management and Delivery

Whew. What a list. Very impressive. Will have to let this stuff sink in.
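In the spirit of item 2 above (inline diff marking), a minimal sketch using only Python’s standard-library difflib; a real cms versioning UI would of course be richer.

```python
# Minimal sketch of inline diff marking, in the spirit of item 2 above.
# Uses only the standard-library difflib; the underlying idea is the
# same: compute a diff and render the changes inline.
import difflib

old = ["Our CMS supports workflow.", "It exports to HTML."]
new = ["Our CMS supports visual workflow.", "It exports to HTML and PDF."]

html_table = difflib.HtmlDiff().make_table(old, new, fromdesc="v1", todesc="v2")
print(html_table)  # an HTML table with inline change markers
```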
Tony ends with a manifesto for oss cms:

  • Fewer projects, more product managers
  • Move up the stack (not yet another repository)
  • Create more common tools (Twingle, WebDAV..)
  • Create a common lingo (CMSML)

Question: how likely is lock-in avoidance? Answer: Market pressure will bring vendors around; for oss, that remains an open question.
Question: Is the market ready for XML databases? Answer: Not yet. Sleepycat is urging OSS vendors to bake XML support in though.
This is a very good talk, I’m looking forward to the video.
2003-05-30: <bergie> Publishing is an engineering principle
<gregor> jon is focusing on his recent topic to do the simple things
<gregor> https://web.archive.org/web/20030603172734/http://weblog.infoworld.com/udell/2003/04/10.html
<bergie> The reward of providing meaningful titles is attention and influence, as your content is found more easily.
<gregor> there is no right unit of content
<bergie> This is a problem in blogging, as you might have many items on the same HTML page. What should be the title then? The date?
<gregor> heh, brent’s law of cms urls is brought up
<bergie> URLs should be “pronounceable”.
<bergie> Law of CMS URLs: the more expensive the CMS the crappier the URLs
<gregor> https://web.archive.org/web/20021019112409/http://ranchero.com/2002/09/30.php
<gregor> as usual, jon comes through with great clarity
<bergie> For organizing search results there are 3 main fields: HTML title, URL and the raw content, in order of importance.
<gregor> you can infer lots of metadata from URLs, doctitles
<gregor> this is rarely leveraged
<bergie> Title should also provide publishing organization, possibly site area.
<gregor> structure within the title tag can help to organize
<gregor> site :: area :: topic for instance
<gregor> you gain a lot from careful URL design. jon was able to glean structure just by analyzing URLs at o’reilly and making assumptions
<gregor> which worked out
<gregor> consistency is low tech, high value
<bergie> CMSs should help content creators to understand and effectively use these techniques.
<gregor> mailing list information architecture sucks
<bergie> Subjects are also very important on mailing lists to make more useful mailing list archives.
<bergie> Why do mail archives always show just the subject and author? Why not a snippet of content?
<danbri> <bergie> Law of CMS URLs: the more expensive the CMS the crappier the URLs
<gregor> jon would like to see one line summaries of mailing list posts instead of just the subject
<danbri> lol
<danbri> so true
<bergie> Show what the email is about in the subject field
<gregor> ThreadsML is brought up
<gregor> https://web.archive.org/web/20030513180127/http://www.threadsml.org/
<gregor> weblogs reinvent discussion
<bergie> Unfortunately threading might break if you change subjects. Consider implementing ThreadsML support.
<gregor> can’t we rethink how to name shared content while we are at it?
<gregor> q: what is threadsML?
<gregor> a: discussion is a portable unit of content. you could rip a subtree of discussion from a site and put it somewhere else
<gregor> seems to be tightly related to proper URI design
<gregor> REST style if i remember correctly
danbri hmms re NNTP and doing just that (shipping discussions around)
<gregor> hehe SlideML is being dissed 🙂
<bergie> “inventing new standards is a sign of weakness”
<besfred> hiya, is the keynote of jon is this the keynote channel ?
<gregor> yes
<besfred> 🙂
<gregor> dw: OPML
<gregor> q: what about OPML?
<bergie> We have 10s of XML formats but not easy tools for regular people to produce meaningful content with them
<gregor> a: powerpoint is an outliner too. aaron did a critique of powerpoint presentations based on tufte
<bergie> Problem with OpenOffice and PowerPoint is that while they’re easy to use they don’t allow meaningful web publishing
<gregor> LOL jon wrote his own slide formats
<gregor> “i’m of the geek tribe, so i had to invent my own”
<besfred> hehe
<gregor> CMS started out in the print world. web is 2nd thought
<bergie> <s:content>INSERT YOUR CONTENT HERE</s:content> is not for those who don’t use Emacs
<gregor> weblogs are the first medium truly for the web
<gregor> why is that?
<bergie> CMSs came from print world, things like deep linking were not considered.
<gregor> – deep linking
<gregor> heh henri 🙂
<gregor> we are still stuck in 1995-era linking technology
<gregor> universal canvas, another long running jon topic
<bergie> Editing UIs are too non-web-like, most CMSs either provide basic TEXTAREAs or IE DHTML Edit control (and Midas)
<gregor> https://web.archive.org/web/20030109124901/http://udell.roninhouse.com/bytecols/2001-06-06.html
<bergie> Things like drag-and-drop linking, image editing and table management is too difficult.
<gregor> an opportunity for a lightweight web writing tool (Xopus, Twingle?)
<gregor> “Microsoft does VERY interesting stuff in XML. consider Infopath, for instance.”
<bergie> There would be a huge opportunity for lightweight, web-aware writing tool that integrates with CMSs (Xopus, Twingle, Bitflux Editor?).
<gregor> What about compound document?
<gregor> “compound docs on the web are a deep & unsolved problem”
<bergie> Old-style desktop tools give the illusion that you’re only managing a single document (oscom.ppt) instead of several pages that require meaningful URLs and titles.
<zoned> udell rocks
<gregor> View Source is VERY important. Binary formats thus don’t cut it
<bergie> Central lesson of WWW is “View source”, leading to “sharing is good”
<gregor> zoned: word
<zoned> he is pitching twingle
<gregor> he is pitching low tech
<gregor> i love that
<zoned> simplicity that enables rich simplicity
<gregor> yep
<gregor> being offline will go away, thus you may be able to solve the referring problem better.
<gregor> (still talking about compound docs)
<gregor> q: versioning with compound documents?
<bergie> This compound document problem might go away as everybody is connected and web becomes the primary content distribution forum (no need to email a single file, put it to floppy, etc).
<gregor> q: you seem to treat google as another UI
<gregor> a: yes, google is a very important UI
<gregor> refactoring on the large vs on the small
<gregor> CMS are good at the large refactoring tasks
<bergie> CMSs excel in refactoring “in the large”, like rearranging trees, make changes consistently across documents, handle access control
<gregor> suck at small tasks, such as capturing email threads
<Morbus> morning all.
<gregor> “XML-oriented outliner would help” (with smirks toward dw)
<bergie> However, CMSs are not good with “in the small” problems like capturing email threads, creating documents that synthesize content from existing documents, maintain and reorganize links
<gregor> jon is advocating the personal CMS
<Morbus> gregor: any word on video of the keynote?
<gregor> it will be processed
<bergie> XML-oriented outliner could help here (wink towards Dave Winer regarding OPML)
<gregor> jon is being taped as we speak
<besfred> w00t
<gregor> “how will normal ppl write the semweb”?
<gregor> heresy: content and presentation shouldn’t always be separate
<gregor> the value of categorization
<bergie> How will normal people write for semantic web? Idea: content and presentation could be combined.
<bergie> Categorization should produce immediate visual results.
<bergie> Tagged items should be reflected in many presentations like HTML rendering, URL, RSS feed, …
<gregor> https://web.archive.org/web/20030628160840/http://www.xml.com/pub/a/ws/2003/05/27/allconsuming.html
<gregor> as an example of leveraging categorization
<Morbus> (is this a transcript, or just idle talk?)
<gregor> transcript
<Morbus> thanks.
<bergie> An “all consuming” Amazon book aggregator is shown as example of easy use of categorization
<gregor> how do we make categorization seamless?
<bergie> We should be able to tell that “this link is a person, that link is to a product page”
<gregor> moving on to XML databases.
<bergie> Example: mark a specific item with a CSS class, then you can make it visually separate and locate it using Xpath
<gregor> it’s about “small pockets of structure”
<zoned> https://web.archive.org/web/20030107044619/http://www.netcrucible.com/blog/2002/12/20.html
<gregor> another notion from jon, the semistructured structure
<gregor> http://web.archive.org/web/20030422163255/http://archive.infoworld.com/articles/pl/xml/02/10/28/021028plxmlclient.xml
<bergie> slides: http://web.archive.org/web/20030811172202/http://weblog.infoworld.com/udell/misc/oscom/intro.html
<bergie> “If I write a document and put in a quotation or code fragment, I should be
able to categorize it. The categorization should be shown visually”
<gregor> wow
<gregor> that was awesome
<chregu> yeah, great.
<bergie> agreed, and pitching Twingle 🙂
<Morbus> gregor: lemme know when videos go up. i am so jonesing right now 😉
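One way to make Jon’s “small pockets of structure” point from the transcript above concrete (mark an item with a CSS class, then locate it with XPath): a minimal sketch, assuming lxml, with an invented HTML snippet.

```python
# Minimal sketch of "small pockets of structure": mark inline items with
# a CSS class, then locate them with XPath. Assumes lxml; the HTML is
# invented for illustration.
from lxml import html

page = html.fromstring(
    '<p>Talk to <span class="person">Jon Udell</span> about '
    '<span class="product">Twingle</span>.</p>'
)

# Match elements whose class attribute contains the token "person".
people = page.xpath('//span[contains(concat(" ", @class, " "), " person ")]')
print([p.text for p in people])  # ['Jon Udell']
```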
2003-06-02: i’m back from oscom 3. the flight was a PITA, with 4 babies almost canceling each other’s cries out, but only almost 😦 oscom was great, and many good things should come out of it going forward.
for now it’s back to the slave pits though.
2003-06-22:

OET: What “lesson learned” from your SlideML project can you share about getting different Open Source projects to work together?

Rothfuss: There are 2 key ones. First, microsteps are preferable to lofty goals that you never reach anyway. Second, the same technical issues pop up in different environments, whether they be Open Source or commercial software projects. You just need to find a common language to recognize that you’re sitting in the same boat.

OET: What are the hazards a developer team should be aware of when they begin to introduce Open Source into a commercial enterprise?

Rothfuss: Not all Open Source software has a sufficiently large community to support its further development. In that vein, becoming a respected Open Source citizen takes some work, but it’s crucial for organizations if they ever want to roll back their modifications into the main line of development.

Also, getting support is much easier if a community respects you. This usually means organizations have to adopt a humble approach toward Open Source communities. For instance, IBM was not welcomed to Apache by virtue of its brand, but rather, each IBM employee had to prove his worth by valuable contributions.

Given enough resources, Open Source software, of course, integrates even better than proprietary software because all pieces can be molded as needed. Few organizations have those resources, though, so for practical purposes it makes much more sense to look out for standards support.

2003-06-25: I will be speaking about moblogs at Seybold San Francisco 2003 (Thursday, September 11). Meanwhile, Michi will be speaking about open source content management (Tuesday, September 9) at the Gilbane conference, which is part of Seybold.
2003-07-23:

The video from the panel You Can’t Make Money with Open Source at OSCOM 3 with Charles Nesson, Ed Boyajian, Ed Kelly, JT Smith and me is now online. Thanks, Bob, for all the hard work to bring this online.
2003-09-03: Here is Tony Byrne’s presentation from OSCOM 3 that lists neat features of commercial CMS that are not yet available in open source CMS. A list for inspiration.
2003-09-04: Michi managed to assemble a nice crowd for the OSCOM sprint at Seybold. Very much top-notch people; I wonder what my role should be there 🙂
2003-09-10: I was just conversing with Lauren Wood, who chairs the XML conference in Philadelphia in December. She is interested in having an OSCOM sprint on the show floor. We will try to make it happen. Also, O’Reilly expressed interest in hosting the next OSCOM within OSCON (try saying that fast 3 times :). I guess I’m now firmly on the conference circuit. Say hello to air-conditioned hotel lobbies.
2003-12-07:

OSCOM (Open Source CMS association) is organizing another “hackathon” to encourage development and — dare we hope! — inter-project cooperation on open-source client tools. Held during late January in Zurich, Switzerland, this hackathon, or “sprint,” will focus on various open-source authoring approaches, such as Twingle, Bitflux, and plain old Mozilla… sign up to participate

Amen, Tony.
2004-01-23:

2004-03-03: We made some changes at oscom.org recently to increase customer satisfaction 😉 These include:
Planet oscom
planet oscom aggregates weblogs with oscom-related content. Send me an email if you have a feed that should be added. You may want to read up on category feeds, as we don’t want to syndicate your cat pictures, just the oscom posts (a rough sketch of that kind of filtering follows at the end of this post).
Self-serve oscom matrix
You can now request an account with michi ( @ apache org), and maintain the data for your open source cms yourself.
General mailing list
The general @ oscom.org mailing list is now open for business (again). You can ask and answer questions or make comments with regard to Open Source Content Management, and stay informed about news on the next conferences, hackathons/sprints and other events.
Upcoming events
We now list upcoming events on the oscom front page.
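For the category feeds mentioned under Planet oscom above, here is a rough sketch of the kind of filtering involved, assuming feedparser; the feed URL and the “oscom” tag are placeholders, and Planet itself does its own thing internally.

```python
# Rough sketch of category-based filtering for an aggregator: keep only
# entries tagged "oscom". Assumes feedparser; the feed URL is a
# placeholder, and Planet itself works differently internally.
import feedparser

feed = feedparser.parse("http://example.org/blog/feed.xml")
oscom_entries = [
    entry for entry in feed.entries
    if any(tag.get("term", "").lower() == "oscom" for tag in entry.get("tags", []))
]

for entry in oscom_entries:
    print(entry.title, entry.link)
```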
2004-05-22: I went and added more feeds to the planet oscom aggregator. For now, top-level feeds are oscom-related or main project feeds. I also created sub categories:

to do:

  • integrate sub categories into main navigation
  • list names of aggregated people
  • add more project feed lists (as OPML preferably)

2004-06-15: The RFP for OSCOM.4 (Sept. 29 – Oct 1, Zurich, Switzerland) is now available. The theme of OSCOM.4 is “Cross-Pollination”. This will be a conference with assistance from the Apache Software Foundation for the ApacheTracks content. The conference will have 4 tracks:

  1. OSCOM Technical / Community Track
  2. OSCOM Business / Legal Track
  3. ApacheTrack 1
  4. ApacheTrack 2

2004-07-22: So it turns out that one of the proposals we received for oscom.4 was a scam by someone in Nigeria to get a visa to Switzerland and then disappear. I had been wondering about the low quality of that particular proposal.
2004-08-09: Join me at the OSCOM hackathon at ACM Hypertext 2004 in Santa Cruz, August 9-13. We will work on interesting content management problems and hopefully interact with hypertext researchers from all over the world. The event is bound to be an excellent time.
2004-08-11: Registration for OSCOM.4 with Apache Tracks at ETH Zurich, Switzerland from Wednesday, September 29th – Friday, October 1st, 2004 is now open. The program has many interesting talks for people interested in content management and Apache technologies.
2004-09-29: I am helping to organize OSCOM.4 in Zurich.
2004-10-01: Motivated by Danese Cooper, who named oscom 3 (and yours truly) as an influence in pushing corporate blogging at Sun, we’d like to expand the number of feeds we carry on planet oscom. If you are a developer or user of a cms (or of underlying technologies, such as mysql, php or apache), we’d like to hear from you.
2005-05-16:
OSCOMTag banner

As part of LinuxTag 2005, there will be a so-called OscomTag. It will include a conference on the topic of Open Source CMS. In addition, several well-known Open Source CMS projects will exhibit at a free LinuxTag booth.

There will be an exhibition of Open Source CMSs, presentations and panels. They are still looking for talks and if you have an Open Source CMS to present, get in contact with the organizers. While I won’t be there, unfortunately, this looks like it will be a winner.

More information can be found on the OSCOM site.

2005-06-23: My article on the state of open source content management is now available, via content-wire. The blurb reads:

Content management is no exception to this shift toward open source tools. Gregor J. Rothfuss writes about the current state of open source content management and identifies the important applications that will continue to evolve in the next 10 years. Rothfuss describes the progress on a standard for repository-based content (JSR 170) and the fascinating advances driven by software for online collaboration used by open source projects. Finally, he provides a snapshot of the leading open source CMSs – Apache Lenya, Midgard, OpenCms, Plone, and TYPO3. Based on the concepts and applications Rothfuss describes, open source projects will continue to challenge the market position of the traditional vendors by promoting open standards and innovation.

WHERE TO FROM HERE?
If you’d like to keep a closer eye on developments in the open source CMS space, there are many interesting options. For a start, OSCOM regularly organizes conferences and offers Planet OSCOM, a news aggregation service that provides news items from the leading open source CMSs. The CMS page at http://del.icio.us, a social bookmarking service, allows you to see what other people bookmark under the CMS category. I find this to be an excellent way to track emerging trends, or “buzz.” And this is exactly what I recommend you do: keep an eye on the open source landscape, even if you remain a buyer of proprietary software.