The Open Source Business Conference (OSBC) is the first forum to explore not only the role of open source in shaping the future of the embedded, server, and desktop markets, but also the new opportunities OSS presents for both startup and established technology vendors, and how to capitalize on them.
Subject: Important Security Update for the .NET Messenger Service
Date: 19 Aug 2003 02:23:18 -0700
From: .NET Messenger Service Staff dot_net_msgr_svc@msgr.hotmail.com
ATTENTION: IMMEDIATE ACTION REQUIRED FOR MSN AND WINDOWS MESSENGER USERS.
I've gotten that email 1113 times so far. Of course, that number is laughable compared to the 10s of emails I got from the SoBig virus today. It seems no day passes without a Microsoft incident.
I support the Log Format Roadmap because it has a fighting chance of becoming the first practical step toward CMS content interop. Blogs will drive adoption of the principles stated in "against the grain". As a weblog vendor, I support it because it will drive the adoption of better tools and will increase the market for everyone.

2003-06-27: Sam Ruby has been spearheading a major standardization effort in the blog world recently, and he has this to say about his motivations:
About a month ago, my interest and activity in this space kicked into high gear. I started attending weblogging conferences.
Far from claiming to have been the inspiration, I still think it is very nice that OSCOM was able to contribute to the drive towards standardization. This is the stuff we are talking about.

2004-06-08: So that is what Greg Stein has been up to. The sprint was much fun, as were the drinks.
Ever since Atom first popped up, I’ve been interested in it, and even attempted to join a small sprint/discussion at Seybold last year to talk about WebDAV. The bomb threat shut that down, but we simply moved locations for drinks rather than hacking 🙂 So while I’ve been tracking it generally, my specific current interest is through my work at Google. I’m the engineering manager for the Blogger group, so I’ve gotta pay some attention to what we’re signing up for 🙂
2005-09-06: All feeds for this blog now serve Atom 1.0. It will be interesting to watch whether anyone notices or cares. Longer term, /atom.xml is the canonical URL if you want to subscribe.

2006-10-18: RSS / Atom / OPML Schematron is much easier to work with than the mysterious feed validator code. Plus it works for really huge feeds. This has the README for the RSS validator. Pretty out of date, but a good starting point. For one, you need the latest Schematron from Rick Jelliffe, not the old one on this site.

2006-11-21: GData JSON. They also do JSONP, and reuse the Atom serialization.

2006-12-01: GData for Google Spreadsheets. The data web circle gets more complete. This is (one) counterpart to the web formulas in Google Sheets. Now as to how GData can play in the semweb space. Maybe via Queso.

2007-01-31: Tim wonders how to use Atom categories properly. Link to the Wikipedia URL of the tag, I'd say.

2007-02-14: If you browse to a page with an RSS or Atom feed, you get the option to immediately add that feed to Google Reader for mobile via Mobile Proxy Feed Discovery.

2007-03-31: Some Atom extensions by Nature to encourage text mining. I don't know… They do not seem to reuse core Atom in their examples. Plus I am not sure how useful a word count really is.

2007-05-16: GdataServer
Generally speaking, the Lucene GData Server is an extensible syndication format server providing CRUD actions to alter feed content, authentication, optimistic concurrency and full text search based on Apache Lucene.
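The interaction model behind a server like this is worth spelling out: APP is just HTTP CRUD on Atom documents. A minimal sketch in Python with the requests package; the collection URI and the entry are made up, but the POST / Location / PUT / DELETE sequence is straight from the protocol.

import requests

COLLECTION = "http://localhost:8080/feeds/myblog"   # hypothetical collection URI
ENTRY = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Test post</title>
  <content type="text">Hello from APP</content>
</entry>"""
headers = {"Content-Type": "application/atom+xml"}

# Create: POST a new entry to the collection; the server assigns a member URI.
created = requests.post(COLLECTION, data=ENTRY, headers=headers)
member_uri = created.headers["Location"]

# Read, update, and delete the member via plain HTTP verbs.
requests.get(member_uri)
requests.put(member_uri, data=ENTRY.replace("Hello", "Updated"), headers=headers)
requests.delete(member_uri)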
2007-05-25: APP frontend to LDAP. This might enable some interesting scenarios.

2007-06-01: OpenSearch / Atom interface for Swiss whitepages. Nice!
We have developed an interface for our phone book that allows our telephone data to be integrated into other applications or websites. The interface is based on the REST concept. The results are delivered as an Atom feed, augmented with OpenSearch- and tel.search.ch-specific fields. With the help of a key, the results are also returned in structured form. The number of results is limited to 200 entries per query.
Gregor Rothfuss wondered whether I couldn't influence people at Microsoft to also standardize on GData. The fact is that I've actually tried to do this with different teams on multiple occasions, and each time I've tried, certain limitations…
2007-06-10: Oy. And all this because I asked Dare why Microsoft doesn’t use APP.
There was quite a flurry of blogging about the Atom Publishing Protocol (APP) over the weekend, all kicked off by Dare Obasanjo’s criticisms of the protocol. Some of the posts were critical of Dare and his motives, but I’m thankful he started the conversation.
2007-07-26: The chorus for putting more REST into GIS / mapping gets louder, yay
The only thing needed to bring together this messy new world Atlas, is a global agreement about the structure of the data used to annotate the maps, as well as agreement on the format for retrieving such.
2007-07-28: WFS simple was hijacked, as usual, by people who don’t understand why worse is better. This is why I am not in the least interested in WFS and am betting on APP instead.
if the geospatial standards community continues on this path of isolating itself, of looking upstream to the ISO rather than downstream to the distributed neogeo developer community, it will miss out on being connected to amazing things.
Here’s a Feature Demo of a RESTful WFS-T with a call for GE to support posting of features. I would go further and ask for APP support.
Version control for Collaborative Mapping. Calls for diffs and patches. Might be built on top of an APP infrastructure, imho
The next major area of tool improvement I see is expanding the wiki notion of editing to more of a merging revision control model, with branches, versions, patches, and eventually expanding into distributed repositories.

The 'patch' is a small piece of code that can be applied to a computer program to fix something. They are widely used in the open source software world, both to get the latest improvements, and to allow those who have commit rights to a source repository to review outside improvements before putting them in. This helps create the meritocracy around projects, as they don't let just anyone into the repository as they might break the build. Such a case is less likely with maps, but sometimes core contributors might want to see a couple of sample patches before letting a new member in.

In the GeoServer versioning WFS work we have a GetDiff operation that returns a WFS Transaction that can then be applied to another WFS. This fits in with the technical part of how a patch works: they're really easy to apply to one's dataset. But unfortunately a WFS Transaction is not as easy to read as a code patch. The other great thing about patches is that when leaf nodes are updating their data they can just request the change set (the patches) instead of having to do a full checkout. So I'm still not sure how to solve this problem; the WFS Transaction is the best I've got, but I think we can do better: have a nice little format that just describes what changed.
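To make that closing wish concrete, here is one way such a "nice little format" could look. This is purely a hypothetical sketch, not anything GeoServer ships: a changeset as a flat list of insert / update / delete records that any node can apply to its copy of the data.

# Hypothetical changeset format: easy to read, easy to apply, and easy to
# ship to leaf nodes as a "patch" instead of a full checkout.
changeset = [
    {"op": "insert", "id": "node/78", "geometry": [8.56, 47.36], "tags": {"amenity": "cafe"}},
    {"op": "update", "id": "way/1042", "geometry": [[8.54, 47.37], [8.55, 47.38]]},
    {"op": "delete", "id": "node/77"},
]

def apply_changeset(features, changeset):
    """Apply a changeset to a dict of features keyed by feature id."""
    for change in changeset:
        if change["op"] == "delete":
            features.pop(change["id"], None)
        else:
            # insert and update both just write the new payload
            features[change["id"]] = {k: v for k, v in change.items()
                                      if k not in ("op", "id")}
    return features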
Better UIs for Collaborative Mapping. More calls for rollback tools, and would like to see GE post to geoserver, etc
I think we need more user friendly options for collaborative editing. Not just putting some points on a map, but being able to get a sense of the history of the map, getting logs of changes and diffs of certain actions. Editing should be a breeze, and there should be a number of tools that enable this. Google’s MyMaps starts to get at the ease of editing, but I want it collaborative, able to track the history of edits and give you a visual diff of what’s changed. Rollbacks should also be a breeze – if you have really easy tools to edit it’s also going to be easier for people to vandalize. So you need to make tools that are even easier to rollback.
AtomPub sits in a very strange place, as it has the potential to disrupt six or more industry sectors: Enterprise Content Management, Blogging, Digital/Desktop Publishing and Archiving, Mobile Web, EAI/WS-* messaging, Social Networks, and Online Productivity tools. As interesting as the adoption rates will be the people and sectors finding reasons not to use it, protecting distribution channels and data lock-ins with more complicated solutions. Any kind of data garden is fair game for AtomPub to rationalize.
Why Digital Signature? This idea was first proposed by James Snell, and it’s a good one. Mind you, the benefits are a little bit theoretical, since no feed-reading clients that I’ve seen actually check a digital signature. The argument for this is similar to that for TLS; a bad guy who could somehow insert a fake press release into the feed could make zillions by gaming the share price. A verifiable digital signature would let someone reading the feed know that the news in it really truly did come from Sun.
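Checking a signature client-side is not much code in principle. Atom signing as proposed uses XML-DSig with canonicalization, which is considerably more involved; this Python sketch with the cryptography package assumes a simplified detached RSA signature, and all file names are hypothetical.

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

feed_bytes = open("feed.xml", "rb").read()       # the feed as fetched
signature = open("feed.xml.sig", "rb").read()    # detached signature (hypothetical)
public_key = serialization.load_pem_public_key(open("publisher.pem", "rb").read())

try:
    # Real XML-DSig also canonicalizes the XML before hashing; skipped here.
    public_key.verify(signature, feed_bytes, padding.PKCS1v15(), hashes.SHA256())
    print("feed really did come from the publisher")
except InvalidSignature:
    print("signature check failed: do not trust this feed")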
2007-07-31: Atom for KML. Nice. I want to do more, but this is a good start. The Atom / KML meme spreads. Perception is reality, and I approve.

2007-08-03: Appfs
appfs can mount remote resources exposed via the Atom Publishing Protocol as a local filesystem.
2007-08-07: RESTful partial updates. Maybe useful for APP / KML to supplement update
over the past couple of months, there’s been a lot of discussion about the problem of partial updates in REST-over-HTTP. The problem is harder than it appears at first glance. The canonical scenario is that you’ve just retrieved a complicated resource, like an address book entry, and you decide you want to update just one small part, like a phone number. The canonical way to do this is to update your representation of the resource and then PUT the whole thing back, including all of the parts you didn’t change. If you want to avoid the lost update problem, you send back the ETag you got from the GET with your PUT inside an If-Match: header, so that you know that you’re not overwriting somebody else’s change.
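The GET / modify / conditional PUT cycle described above, as a Python sketch. The address-book URL is hypothetical; the ETag and If-Match mechanics are standard HTTP.

import requests

url = "http://example.org/contacts/42"   # hypothetical address book entry

# Retrieve the full resource and remember its ETag.
r = requests.get(url)
contact = r.json()
etag = r.headers["ETag"]

# Change one small part locally...
contact["phone"] = "+41 44 123 45 67"

# ...then PUT the whole thing back, conditional on nobody else having
# changed it in the meantime (this avoids the lost update problem).
resp = requests.put(url, json=contact, headers={"If-Match": etag})
if resp.status_code == 412:   # Precondition Failed: somebody got there first
    print("conflict: re-fetch and retry")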
Zend Google Data Client
The Zend Google Data Client provides a PHP 5 component to execute queries and commands against the Google Data APIs.
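Part of GData's charm is that no client library is required: it is Atom over HTTP, and alt=json yields a JSON rendering of the same serialization. A sketch; the spreadsheet key is a placeholder.

import requests

# Public GData feed for a spreadsheet; KEY is a placeholder.
url = "http://spreadsheets.google.com/feeds/list/KEY/od6/public/values"
feed = requests.get(url, params={"alt": "json"}).json()

# GData's JSON mapping puts element text content under "$t".
for entry in feed["feed"]["entry"]:
    print(entry["title"]["$t"])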
2007-08-14: Winer on Atom. Sore loser.

2007-08-19: How to deal with the sliding window problem, where feed producers update more often than consumers fetch, and consumers thus might miss entries.
A standardized way to get at previous entries that have scrolled out of a feed, and at the complete archive.

2007-08-28: YouTube GData. Nice to see more media-heavy usages. Now we have pretty much all of them; only KML is missing.

2007-10-29: APP Lock-In. So cute. Microsoft is in a tight spot: admit they have no strategy and use APP, or invent their own. It seems they are trying to build a case to do just that.
It seems that while we weren't looking, Google moved us a step away from a world of simple, protocol-based interoperability on the Web to one based on running the right platform with the right libraries. Usually I wouldn't care about whatever bad decisions the folks at Google are making with their API platform. However the problem is that it sends out the wrong message to other Web companies that are building Web APIs. The message that it's all about embracing and extending Internet standards, with interoperability based on everyone running sanctioned client libraries instead of via simple, RESTful protocols, is harmful to the Internet. Unfortunately, this harkens back to the bad old days of Microsoft and I'd hate for us to begin a race to the bottom in this arena.
2007-12-06: FeedSync. The full syncing requirement makes this heavyweight.
Although FeedSync is capable of full-blown multi-master synchronization, there are all kinds of interesting uses, including simple one-way uses. Consider, for example, how RSS typically has no memory. Most blogs publish items into a rolling window. If you subscribe after items have scrolled out of view, you can't syndicate them. A FeedSync implementation could enable you to synchronize a whole feed when you first subscribe, then update items moving forward. It could also enable the feed provider to delete items, which you might not want if the items are blog postings, but would want if they're calendar items representing cancelled events.
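The lighter-weight answer to the no-memory problem is RFC 5005 feed archiving: each feed document points at the previous archive via a rel="prev-archive" link, and a subscriber simply walks the chain. A Python sketch, with a hypothetical feed URL.

import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def all_entries(feed_url):
    """Collect every entry ever published by following rel="prev-archive"
    links back through the archive documents (RFC 5005)."""
    entries = []
    while feed_url:
        with urllib.request.urlopen(feed_url) as resp:
            root = ET.parse(resp).getroot()
        entries.extend(root.findall(ATOM + "entry"))
        feed_url = next((link.get("href") for link in root.findall(ATOM + "link")
                         if link.get("rel") == "prev-archive"), None)
    return entries

entries = all_entries("http://example.org/atom.xml")   # hypothetical feed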
It is becoming increasingly clear that, if a useful device for quantum computation will ever be built, it will be embodied by a classical computing machine with control over a truly quantum subsystem, this apparatus performing a mixture of classical and quantum computation. This paper investigates a possible approach to the problem of programming such machines: a template high level quantum language is presented which complements a generic general purpose classical language with a set of quantum primitives.
A very interesting paper, basically stating that any quantum computer will need a classical front end to deal with data pre- and post-processing. Even the very pragmatic distinction between call-by-value and call-by-reference needs to be rethought:
It is well known that the no-cloning theorem excludes the possibility of replicating the state of a generic quantum system. Since the call-by-value paradigm is based on the copy primitive, this means that quantum programming can not use call-by-value; therefore a mechanism for addressing parts of already allocated quantum data must be supplied by the language.
2007-02-12: D-Wave 16-qubit prototype, with hopes for a 1024-qubit system in late 2008. Funny: they are not sure if it is a quantum computer at all; it might be an analog computer.

2007-04-22: The quantum computation version of the stacked turtle
But it was still pretty exciting stuff. Holy Zarquon, they said to one another, an infinitely powerful computer? It was like a thousand Christmases rolled into one. Program going to loop forever? You knew for a fact: this thing could execute an infinite loop in less than 10 seconds. Brute force primality testing of every single integer in existence? Easy. Pi to the last digit? Piece of cake. Halting Problem? Sa-holved.
They hadn’t announced it yet. They’d been programming. Obviously they hadn’t built it just to see if they could. They had had plans. In some cases they had even had code ready and waiting to be executed. One such program was Diane’s. It was a universe simulator. She had started out with a simulated Big Bang and run the thing forwards in time by 13.6b years, to just before the present day, watching the universe develop at every stage – taking brief notes, but knowing full well there would be plenty of time to run it again later, and mostly just admiring the miracle of creation.
For “generic” problems of finding a needle in a haystack, most of us believe that quantum computers will give at most a polynomial advantage over classical ones.
2011-01-20: 10b qubits is very significant. I am sure there are all sorts of caveats, but still: wow.

2011-10-04: Philosophy and Theoretical Computer Science class by Scott Aaronson.
This new offering will examine the relevance of modern theoretical computer science to traditional questions in philosophy, and conversely, what philosophy can contribute to theoretical computer science. Topics include: the status of the Church-Turing Thesis and its modern polynomial-time variants; quantum computing and the interpretation of quantum mechanics; complexity aspects of the strong-AI and free-will debates; complexity aspects of Darwinian evolution; the claim that “computation is physical”; the analog/digital distinction in computer science and physics; Kolmogorov complexity and the foundations of probability; computational learning theory and the problem of induction; bounded rationality and common knowledge; new notions of proof (probabilistic, interactive, zero-knowledge, quantum) and the nature of mathematical knowledge. Intended for graduate students and advanced undergraduates in computer science, philosophy, mathematics, and physics. Participation and discussion are an essential part of the course.
2013-04-13: Quantum Computing Since Democritus. Written in the spirit of the likes of Richard Feynman, Carl Sagan, and Douglas Hofstadter, and touching on some of the most fundamental issues in science: the unification of computation and physics. Kind of like A New Kind of Science, without the BS. Plus Scott is a funny guy, so even if you only understand 5% (likely, given the deep topics), it seems worth it. If you want to get a taste, try this paper: NP-complete Problems and Physical Reality

2017-07-09: Multi-colored photons
the technology developed is readily extendable to create 2-quDit systems with more than 9000 dimensions (corresponding to 12 qubits and beyond, comparable to the state of the art in significantly more expensive/complex platforms).
2018-10-09: Quantum Verification. How do you know whether a quantum computer has done anything quantum at all?
After 8 years of graduate school, Mahadev has succeeded. She has come up with an interactive protocol by which users with no quantum powers of their own can nevertheless employ cryptography to put a harness on a quantum computer and drive it wherever they want, with the certainty that the quantum computer is following their orders. Mahadev’s approach gives the user “leverage that the computer just can’t shake off.” For a graduate student to achieve such a result as a solo effort is “pretty astounding”. Quantum computation researchers are excited not just about what Mahadev’s protocol achieves, but also about the radically new approach she has brought to bear on the problem. Using classical cryptography in the quantum realm is a “truly novel idea. I expect many more results to continue building on these ideas.”
Space-time achieves its "intrinsic robustness," despite being woven out of fragile quantum stuff. "We're not walking on eggshells to make sure we don't make the geometry fall apart. I think this connection with quantum error correction is the deepest explanation we have for why that's the case."
That rapid improvement has led to what’s being called “Neven’s law,” a new kind of rule to describe how quickly quantum computers are gaining on classical ones. Quantum computers are gaining computational power relative to classical ones at a “doubly exponential” rate — a staggeringly fast clip. With double exponential growth, “it looks like nothing is happening, nothing is happening, and then whoops, suddenly you’re in a different world.”
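To see why doubly exponential growth sneaks up on you: if qubit counts grow exponentially with time, and the classical cost of keeping up grows exponentially with the qubit count, the combined curve is 2^(2^k). A toy illustration:

# If qubit counts double every step (exponential growth), and simulating
# n qubits classically takes ~2^n work, classical-equivalent power grows
# doubly exponentially: 2^(2^k).
for k in range(6):
    qubits = 2 ** k
    print(f"step {k}: {qubits:3d} qubits  ->  ~2^{qubits} = {2 ** qubits:,} classical ops")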
This is certainly the most extreme of the nerd rapture curves I have seen:
the very near future should be the watershed moment, where quantum computers surpass conventional computers and never look back. Moore's Law cannot catch up. A year later, it outperforms all computers on Earth combined. Double the qubits again the following year, and it outperforms the universe.
In the mid-2000s, a small diamond was mined from the Ural Mountains. It was called the 'magic Russian sample'. The diamond was extremely pure, almost all carbon (which isn't common), but with a few impurities that gave it strange quantum mechanical properties. Now anyone can go online and buy a $500 quantum-grade diamond for an experiment. The diamonds have nitrogen impurities, but what Schloss's group needs is a hole right next to one, called a nitrogen vacancy. Diamonds like the Russian 'magic' sample can hold qubits in place and thus act much the same way a trapped-ion rig does. By replacing a single carbon atom in a diamond's atomic lattice with a nitrogen atom and leaving a neighboring lattice node empty, engineers can create what's called a nitrogen-vacancy (NV) center. This is generally inexpensive, since it's derived from nature.
With no evaluative judgment attached, this is an unprecedented time for quantum computing as a field. Where once faculty applicants struggled to make a case for quantum computing (physics departments: “but isn’t this really CS?” / CS departments: “isn’t it really physics?” / everyone: “couldn’t this whole QC thing, like, all blow over in a year?”), today departments are vying with each other and with industry players and startups to recruit talented people. In such an environment, we’re fortunate to be doing as well as we are. We hope to continue to expand.
2019-07-26: Quantum hardware should make Monte Carlo methods more powerful and accurate.

2019-08-20: 1 Million Qubits
Fujitsu has a Digital Annealer with 8192 qubits and a 1M-qubit system in the lab. The Digital Annealer is a new technology used to solve large-scale combinatorial optimization problems instantly. It uses a digital circuit design inspired by quantum phenomena and can solve problems that are difficult and time-consuming for classical computers.
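The problem class these annealers target is QUBO: minimize x^T Q x over bit vectors x. For a feel of the problem shape, here is a classical simulated-annealing sketch on a made-up two-variable instance; it illustrates the problem class, not Fujitsu's hardware approach.

import math
import random

def qubo_energy(Q, x):
    """Energy of bit vector x under QUBO matrix Q: x^T Q x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(Q, steps=20000, t_start=5.0, t_end=0.01):
    """Minimize a QUBO instance by classical simulated annealing."""
    n = len(Q)
    x = [random.randint(0, 1) for _ in range(n)]
    energy = qubo_energy(Q, x)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(n)   # propose flipping one bit
        x[i] ^= 1
        new_energy = qubo_energy(Q, x)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / t):
            energy = new_energy
        else:
            x[i] ^= 1             # reject: undo the flip
    return x, energy

# Toy instance: minimize x0 + x1 - 2*x0*x1 (optima: x0 == x1).
Q = [[1, -2],
     [0, 1]]
print(anneal(Q))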
Microsoft is developing Majorana-based topological qubits, which promise higher quality and lower error rates. A high-quality hybrid system made of InSb nanowires with epitaxially grown Al shells has revealed ballistic superconductivity and a quantized zero-bias conductance peak. This holds great promise for making the long-sought topological qubits.
They have made the simulation of the quantum electrons so fast that it can run for very long times without restriction, so that the effect of their motion on the movement of the slow ions becomes visible.
Transparent crystals with optical nonlinearities could enable quantum computing at room temperature by 2030
2020-12-03: BosonSampling. A second method achieves quantum supremacy.
Do you have any amusing stories? When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they'd now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there. This was by far the most expensive referee report I ever wrote!
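The ~2^n is not hand-waving: verifying BosonSampling amplitudes boils down to computing matrix permanents, and the best known exact algorithm, Ryser's formula, is exponential in n. A sketch:

from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's formula, O(2^n * n^2).
    Exact verification of n-photon BosonSampling reduces to permanents
    like this one, hence the ~2^n classical cost."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10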
2021-12-06: Quantum Computing Overview. A really good overview of the field, with a clear explanation of how quantum computers work; why people are excited about quantum algorithms and their value; the potential applications, including quantum simulation, artificial intelligence, and more; the models and physical implementations people are using to build them, such as superconducting devices, quantum dots, trapped ions, photons, and neutral atoms; and the challenges they face.
Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
2023-06-19: It might be possible to work around noise, making quantum computing practical.
IBM physicist Abhinav Kandala conducted precise measurements of the noise in each of their qubits, which can follow relatively predictable patterns determined by their position inside the device, microscopic imperfections in their fabrication and other factors. Using this knowledge, the researchers extrapolated back to what their measurements — in this case, of the full state of magnetization of a 2D solid — would look like in the absence of noise. They were then able to run calculations involving all of Eagle’s 127 qubits and up to 60 processing steps — more than any other reported quantum-computing experiment. The results validate IBM’s short-term strategy, which aims to provide useful computing by mitigating, as opposed to correcting, errors. Over the longer term, IBM and most other companies hope to shift towards quantum error correction, a technique that will require large numbers of additional qubits for each data qubit.
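A toy rendering of the extrapolation step. The numbers below are invented, and IBM's actual scheme (zero-noise extrapolation on top of learned noise models) is far more sophisticated; this just shows the core move of measuring at deliberately amplified noise and reading off the zero-noise intercept.

import numpy as np

# Hypothetical measurements of one expectation value at amplified noise.
noise_scale = np.array([1.0, 1.5, 2.0, 3.0])      # 1.0 = native hardware noise
measured    = np.array([0.72, 0.61, 0.52, 0.38])  # made-up values

# Fit a low-degree polynomial and evaluate it at zero noise.
coeffs = np.polyfit(noise_scale, measured, deg=2)
print(f"estimated noiseless value: {np.polyval(coeffs, 0.0):.3f}")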
I tested the new smartphones last night. I had to check into this site, of course, and it rendered not bad at all. I'm inclined to do a simpler, one-column layout without the sidebars if I ever decide to buy one of those. I still think their GUI sucks ass (it's based on tiny buttons with illogical function mapping). I'd much rather have a phone with a touch screen, but those are very bulky. So I guess I will skip this generation of mobiles. The picture is almost original size. It's highly alarming that every mobile device is demoed with sports scores. Who gives a fuck about sports scores?
2002-11-05: Xaraya is now public. Initial reactions are very positive. I'm glad we didn't do any forums, because forums attract scum. In other news, I did an interview with Internet Intern, a German mass-market internet rag (some 400k circulation). I talked about the reasons behind Xaraya and why Xaraya will succeed where other PHP CMSes fail:
skilled developers
a real architecture
no incompetent advocates
The article should be up in 2 weeks.

2002-11-09: I opened a whoopass can o' worms when I outlined my plans to implement workflow for Xaraya early next year. I want to start very simple (actually the work would be done for a client project), because I knew from preparing the web services talks that workflows are a very complex topic. Gary suggested I look into WfMC, the industry standard for workflows. Very nice, but I guess implementing it would keep me busy for a year. Workflows might be a topic for OSCOM too. Both Wyona and Zope already implement some support for XML-defined workflows.

2002-11-28: I created my first WSDL file today, with the help of some tools. I'm pretty sure my WSDL is invalid. Scripting languages with their weak type systems and WSDL don't exactly mix well. I hope to eventually enable web services to call into the APIs that Xaraya offers. At this stage, it is merely a nice idea, but I'm slowly making progress.

2003-01-25: I took the plunge, and am now running a current Xaraya snapshot again. Lots of new toys to play with 🙂
Yeah, I know I have been slow with updates, but (a) life has been hectic and (b) Xaraya is not yet very convenient for blogging.
On the bright side, comments should now be fully functional, with a nice tree view.

2003-02-12: Another one joins the MT love. Marcel is a buddy from the Xaraya PMC, and the two of us should really be eating our own dogfood, but alas, it is not there yet re: blogging comfort. One day soon, though.

2003-02-12: Xaraya Usability Recommendations is probably one of the more extensive studies of usability in the open source field. And we are not even at 1.0 yet. Kudos to Doug and Drew for this fine doc. If we follow through with this one, good things are in store for the web layman.

2003-02-21: This feed validates as RSS. I took the plunge and fixed the RSS feed for Xaraya. Unlike PostNuke, Xaraya will ship with a rich feed that makes use of the 2.0 format. We now also have SOAP support. Mike pushed a changeset that enables calling Xaraya API methods over SOAP. Here is the relevant part from the WSDL:
<wsdl:arrayType="xsd:string[]" />
<xsd:complexType name="wsModAPIFuncRequest">
  <xsd:sequence>
    <xsd:element name="module" type="xsd:string" />
    <xsd:element name="func" type="xsd:string" />
    <xsd:element name="type" type="xsd:string" />
    <xsd:element name="args" type="xsd:xsdl:myelement0" />
  </xsd:sequence>
</xsd:complexType>
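For the curious, calling such an endpoint takes nothing more than an HTTP POST with a SOAP envelope shaped like the wsModAPIFuncRequest type above. A Python sketch; the endpoint URL and the argument values are hypothetical, and a real client would read both from the WSDL.

import urllib.request

ENDPOINT = "http://example.org/xaraya/ws.php"   # hypothetical service URL

ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <wsModAPIFuncRequest>
      <module>articles</module>
      <func>getall</func>
      <type>user</type>
      <args/>
    </wsModAPIFuncRequest>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'})
print(urllib.request.urlopen(req).read().decode("utf-8"))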
2003-02-27: Aye, Kevin, we will work extra hard on usability.
The PN user registration process is plain silly. It is one of the things that I was hoping to see the last of in eventually moving to Xaraya. I have a suggestion for anyone working on the end-user (non-admin) aspects of the core modules. Pretend that the typical user is my mother who gets very flustered when web site processes aren’t as easy and straightforward as possible. And Mom cries when she gets flustered. Please folks … don’t make my Mom cry.
It is my pleasure to announce the first Beta Release of Xaraya (.900). This release is the culmination of nearly a year's worth of hard work and undying dedication. All of the developers on the project have devoted many hours to reach where we are today. The first Beta release for Xaraya is intended to capture a baseline of what needs to be accomplished before the final release. This is merely another step in the long journey that began with PHP Nuke, and then PostNuke, for many of us.
With the articles system combined with the dynamic data system (both written by Michel), a webmaster no longer has to wait for developers to dream up new modules. All a webmaster has to do is dream up what they want to display, and from there it’s just a matter of adding 2 templates into the system and creating a new publication type to gather the data.
I just installed Xaraya again after neglecting it for a while, and I must say: very impressive. Time to mop up the nuke market with their silly systems.

2003-06-30: Looks like everyone and his dog is converging on XML pipeline processors these days. With more powerful XSLT editors, maybe the time has come for these technologies to appeal to a more mainstream audience.

2003-07-02:
KAYWA
No anagrams found.
WYONA
AN YOW
NOWAY
NAY OW
ANY OW
YAW ON
YAW NO
WAY ON
WAY NO
XARAYA
A RAY AX
LENYA
LAY NE
LAY EN
AN LEY
AN ELY
AN LYE
ANY EL
NAY EL
2003-07-20: Xaraya goes new ways again. They now use Phing, a PHP clone of Ant, to maintain build files for the distribution. Very neat. The more standardization, the better. Apache Lenya is using Ant more and more for various scripting tasks too. This nicely leverages the very good Ant documentation and literature, and means you have to learn fewer concepts.

2003-08-02: Xaraya is now BitKeeper project #6 by number of change sets.

2003-11-04:
Trolling through the BitKeeper tree on the site, looking at the change sets, the different names, the comments, it just sort of occurred to me that this is beginning to look like a factory, chugging merrily along. I'm noticing more and more people on the public mailing lists wanting to take the bk plunge. The collective consciousness has apparently reached a critical-mass conclusion and internalized that this is the normal way of life around here. Quick rewind to 8 months ago, when most everyone (me especially) was still trying to figure out how to do a merge… You've come a long way, baby. (Marc)
I am very happy that we made the decision to establish sound processes, use BitKeeper instead of CVS, and aim for quality. It took longer than the usual crappy PHP project, but then again, it is of much higher quality. We are now the number 3 user of BitKeeper, only surpassed by MySQL and the Linux kernel. Amazing.

2003-12-12: This post led to an avalanche. Wow, 19 months later, the repercussions are still working their way through the PHP CMS community 🙂

2007-06-20: linux.com is now running on Xaraya, a CMS I co-founded
Picture the following scenario: Microsoft has created a weblog tool that is designed to run inside the firewall at a company. It’s browser-accessible from any 4.0 or higher web browser and doesn’t require Windows on the client. It leverages their strengths by integrating with Office, and there’s no per-user client access fee. Then imagine if this weblogging tool were deployed to millions of users, all before anyone in the weblog community took notice. That scenario is real.
anil dash thinks that microsoft is dipping a toe in the weblog market, and i have it confirmed that there are internal blogging tools at microsoft too. this could be a huge boost to the weblog ecosystem, or it could kill it. the current infrastructure does not scale to 10s of millions of authors with ease, and the culture that has formed around weblogs even less. as with other geek toys before, blogs will have to be made ready for newbies, and will lose much of their chic in the process. the efficiency improvements for society will outweigh these downsides (which are only gripes of the elite, anyway)
Just came back from borders where i read most of bad boy ballmer. an entertaining read, although not very insightful. take-home messages? maybe that a very visible commitment to a cause can do you good.
it is a daily requirement in his division at microsoft that everyone — himself included — spend an hour a day helping customers, for example by answering questions on newsgroups.
microsoft is trying something new. asp had the downside that there were zero interesting open source apps beyond “hello world”. apparently, this shall not happen with .net, and several initiatives suggest that.
web matrix is a free ASP.net IDE that features wysiwyg web forms, web services support and much more.
mono is making quick progress. a lot of the core libraries have already been implemented.
rotor is a complete CLI implementation that will help the mono effort.