help-gnu-emacs

Re: Fascinating interview by Richard Stallman at KTH on emacs history and internals


From: Mark Tarver
Subject: Re: Fascinating interview by Richard Stallman at KTH on emacs history and internals
Date: Wed, 08 Dec 2010 15:19:01 -0000
User-agent: G2/1.0

On 15 July, 23:21, bolega <gnuist...@gmail.com> wrote:
> http://www.gnu.org/philosophy/stallman-kth.html
>
> RMS lecture at KTH (Sweden), 30 October 1986
>
> (Kungliga Tekniska Högskolan (Royal Institute of Technology))
> Stockholm, Sweden
>
> Arranged by the student society
> “Datorföreningen Stacken”
> 30 October 1986
>
> [Note: This is a slightly edited transcript of the talk. As such it
> contains false starts, as well as locutions that are natural in spoken
> English but look strange in print. It is not clear how to correct them
> to written English style without ‘doing violence to the original
> speech’.]
>
> It seems that there are three things that people would like me to talk
> about. On the one hand I thought that the best thing to talk about
> here for a club of hackers, was what it was like at the MIT in the old
> days. What made the Artificial Intelligence Lab such a special place.
> But people tell me also that since these are totally different people
> from the ones who were at the conference Monday and Tuesday that I
> ought to talk about what's going on in the GNU project and that I
> should talk about why software and information can not be owned, which
> means three talks in all, and since two of those subjects each took an
> hour it means we're in for a rather long time. So I had the idea that
> perhaps I could split it into three parts, and people could go
> outside for the parts they are not interested in, and that then when I
> come to the end of a part I can say it's the end and people can go out
> and I can send Jan Rynning out to bring in the other people. (Someone
> else says: “Janne, han trenger ingen mike” (translation: “Janne, he
> doesn't need a mike”)). Jan, are you prepared to go running out to
> fetch the other people? Jmr: I am looking for a microphone, and
> someone tells me it is inside this locked box. Rms: Now in the old
> days at the AI lab we would have taken a sledgehammer and cracked it
> open, and the broken door would be a lesson to whoever had dared to
> lock up something that people needed to use. Luckily however I used to
> study Bulgarian singing, so I have no trouble managing without a
> microphone.
>
> Anyway, should I set up this system to notify you about the parts of
> the talk, or do you just like to sit through all of it? (Answer:
> Yeaaah)
>
> When I started programming, it was 1969, and I did it in an IBM
> laboratory in New York. After that I went to a school with a computer
> science department that was probably like most of them. There were
> some professors that were in charge of what was supposed to be done,
> and there were people who decided who could use what. There was a
> shortage of terminals for most people, but a lot of the professors had
> terminals of their own in their offices, which was wasteful, but
> typical of their attitude. When I visited the Artificial Intelligence
> lab at MIT I found a spirit that was refreshingly different from that.
> For example: there, the terminals were thought of as belonging to
> everyone, and professors locked them up in their offices on pain of
> finding their doors broken down. I was actually shown a cart with a
> big block of iron on it, that had been used to break down the door of
> one professor's office, when he had the gall to lock up a terminal.
> There were very few terminals in those days, there was probably
> something like five display terminals for the system, so if one of
> them was locked up, it was a considerable disaster.
>
> In the years that followed I was inspired by those ideas, and many
> times I would climb over ceilings or underneath floors to unlock rooms
> that had machines in them that people needed to use, and I would
> usually leave behind a note explaining to the people that they
> shouldn't be so selfish as to lock the door. The people who locked the
> door were basically considering only themselves. They had a reason of
> course, there was something they thought might get stolen and they
> wanted to lock it up, but they didn't care about the other people they
> were affecting by locking up other things in the same room. Almost
> every time this happened, once I brought it to their attention that
> it was not up to them alone whether that room should be locked, they
> were able to find a compromise solution: some other place to put the
> things they were worried about, a desk they could lock, another little
> room. But the point is that people usually don't bother to think about
> that. They have the idea: “This room is Mine, I can lock it, to hell
> with everyone else”, and that is exactly the spirit that we must teach
> them not to have.
>
> But this spirit of unlocking doors wasn't an isolated thing, it was
> part of an entire way of life. The hackers at the AI lab were really
> enthusiastic about writing good programs, and interesting programs.
> And it was because they were so eager to get more work done, that they
> wouldn't put up with having the terminals locked up, or lots of other
> things that people could do to obstruct useful work. That's the
> difference between people with high morale who really care about what
> they're trying to do, and people who think of it as just a job. If
> it's just a job, who cares if the people who hired you are so stupid
> they make you sit and wait? It's their time, their money. But not much
> gets done in a place like that, and it's no fun to be in a place like
> that.
>
> Another thing that we didn't have at the AI lab was file protection.
> There was no security at all on the computer. And we very consciously
> wanted it that way. The hackers who wrote the Incompatible Timesharing
> System decided that file protection was usually used by a self-styled
> system manager to get power over everyone else. They didn't want
> anyone to be able to get power over them that way, so they didn't
> implement that kind of a feature. The result was, that whenever
> something in the system was broken, you could always fix it. You never
> had to sit there in frustration because there was NO WAY, because you
> knew exactly what's wrong, and somebody had decided they didn't trust
> you to do it. You don't have to give up and go home, waiting for
> someone to come in in the morning and fix the system when you know ten
> times as well as he does what needs to be done.
>
> And we didn't let any professors or bosses decide what work was going
> to be done either, because our job was to improve the system! We
> talked to the users of course; if you don't do that you can't tell
> what's needed. But after doing that, we were the ones best able to see
> what kind of improvements were feasible, and we were always talking to
> each other about how we'd like to see the system changed, and what
> sort of neat ideas we'd seen in other systems and might be able to
> use. So the result is that we had a smoothly functioning anarchy, and
> after my experience there, I'm convinced that that is the best way for
> people to live.
>
> Unfortunately the AI lab in that form was destroyed. For many years we
> were afraid the AI lab would be destroyed by another lab at MIT, the
> Lab for Computer Science, whose director was a sort of empire builder
> type, doing everything he could to get himself promoted within MIT,
> and make his organization bigger, and he kept trying to cause the AI
> lab to be made a part of his lab, and nobody wanted to do things his
> way because he believed that people should obey orders and things like
> that.
>
> But that danger we managed to defend against, only to be destroyed by
> something we had never anticipated, and that was commercialism. Around
> the early 80's the hackers suddenly found that there was now
> commercial interest in what they were doing. It was possible to get
> rich by working at a private company. All that was necessary was to
> stop sharing their work with the rest of the world and destroy the
> MIT AI lab, and this is what they did despite all the efforts I could
> to prevent them.
>
> Essentially all the competent programmers at the AI lab, except for
> me, were hired away, and this caused more than a momentary change, it
> caused a permanent transformation because it broke the continuity of
> the culture of hackers. New hackers were always attracted by the old
> hackers; there were the most fun computers and the people doing the
> most interesting things, and also a spirit which was a great deal of
> fun to be part of. Once these things were gone, there is nothing to
> recommend the place to anyone new, so new people stopped arriving.
> There was no-one they could be inspired by, no-one that they could
> learn those traditions from, and no-one to learn how to do good
> programming from. With just a bunch of professors and graduate
> students, who really don't know how to make a program work, you can't
> learn to make good programs work. So the MIT AI lab that I loved is
> gone and after a couple of years of fighting against the people who
> did it to try to punish them for it, I decided that I should dedicate
> myself to try to create a new community with that spirit.
>
> But one of the problems I had to face was the problem of proprietary
> software. For example one thing that happened at the lab, after the
> hackers left, was that the machines and the software that we had
> developed could no longer be maintained. The software of course
> worked, and it continued to work if nobody changed it, but the
> machines did not. The machines would break and there would be no-one
> who could fix them and eventually they would be thrown out. In the old
> days, yes we had service contracts for the machines, but it was
> essentially a joke. That was a way of getting parts after the expert
> hackers from the AI lab fixed the problem. Because if you let the
> field-service person fix it, it would take them days, and you didn't
> want to do that, you wanted it to work. So, the people who knew how to
> do those things would just go and fix it quickly, and since they were
> ten times as competent as any field service person, they could do a
> much better job. And then they would have the ruined boards, they
> would just leave them there and tell the field service person “take
> these back and bring us some new ones”.
>
> In the real old days our hackers used to modify the
>
> [remainder of quoted message truncated]

Perhaps as an antidote:

http://danweinreb.org/blog/rebuttal-to-stallmans-story-about-the-formation-of-symbolics-and-lmi

Mark

