Re: Separate trusted computing designs
From: Marcus Brinkmann
Subject: Re: Separate trusted computing designs
Date: Tue, 29 Aug 2006 13:00:25 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)
At Tue, 29 Aug 2006 10:41:22 +0200,
Christian Stüble <address@hidden> wrote:
> On Thursday, 17 August 2006 09:18, Marcus Brinkmann wrote:
>
> >
> > > To prevent misunderstandings: I don't want to promote TC, nor do I
> > > like its technical instantiation completely. IMO there are a lot of
> > > technical and social issues to be corrected; that's the reason why I
> > > am working on this topic. Nevertheless, a lot of intelligent
> > > researchers have worked on it, and therefore it makes IMO sense to
> > > analyse what can be done with this technology. In fact, who else
> > > should do this?
> >
> > If the technology is fundamentally flawed, then the correct answer is
> > "nobody", and instead it should be rejected outright.
> IMO not. Maybe this is an influence of my PhD advisor(s), but I would try
> to _prove_ that the technology is fundamentally flawed. BTW, the abstract
> security properties it provides are IMO useful.
Well, I believe that my arguments, as disclosed in the email I
referenced, come as close to a proof as you can get in a discussion
about social issues, where not much is known with scientific
confidence.
> > > You are asking a lot of questions that I cannot answer, because they
> > > are the well-known "open issues". The challenge is to be able to
> > > answer them sometimes...
> >
> > If they are open issues, where does your research group's confidence
> > come from that they not only can be solved, but are in fact already
> > solved in your design? From the EMSCB home page (under "Benefits"):
> I am sure we cannot solve all problems. However, some of the problems
> have already been solved (e.g., one design requirement is to ensure that
> applications cannot violate the user's security policy, as described by
> Ross). But I don't want to discuss one of our projects, nor the group
> itself, only the given privacy use case.
If you know that you cannot solve all problems, why does your group
claim repeatedly on your web page that you have, indeed, found a
solution that "guarantees a balance among interests"?
> > I asked for use cases that have a clear benefit for the public as a whole
> > or the free software community.
> I personally would like to be able to enforce my privacy rules even on
> platforms that have another owner.
If you can enforce a property about a system, then it is not owned
exclusively by another party. That's a contradiction in terms.
What you can do is to engage in a contract with somebody else, where
this other party will, for the purpose of the contract (i.e., the
implementation of a common will), alienate his ownership of the
machine so that it can be used for the duration and purpose of the
contract. The contract may have provisions that guarantee your
privacy for the use of it.
But the crucial issue is that for the duration the contract is in
force under such terms, the other party will *not* be the owner of
the machine.
> > > If there are two comparable open operating systems - one providing
> > > these features and one that does not - I would select the one that
> > > does. I do not want to discuss the opinion of the government or the
> > > industry. And I don't want to discuss whether people are intelligent
> > > enough to use privacy-protecting features or not. If other people do
> > > not want to use them, they don't have to. My requirement is that they
> > > have the chance to decide (explicitly or by defining, or using a
> > > predefined, privacy policy enforced by the system).
> >
> > I am always impressed how easily some fall for the fallacy that the use
> > of this technology is voluntary for the people. It is not. First, the
> > use of the technology will be required to access the content. And
> > people will need to access the content to be able to participate in our
> > culture and society. All the major cultural distribution channels are
> > completely owned by the big industry, exactly because this allows these
> > industries to have a grip-hold over our culture. There is an option for
> > popular struggle against this, but it will require a huge effort, and
> > success is by no means guaranteed.
> I did not talk about TC in general, but about the "privacy-protecting agent".
I am not sure what you mean by that term. The crucial point here is
that TC removes from people the choice of which software to run.
> > In the end, this technology, if it succeeds, will be pushed down
> > people's throats. Everybody seems to know and admit this except the
> > "intelligent researchers" (well, and the marketing departments of the
> > big corporations).
> Do you think that the open-source community will implement such features?
They may, either because they support them or because there is a
tactical reason to do so. I am not part of the open-source community,
I am part of the free software community, and there the only reason to
support such features would be tactical: to avoid giving ground back
to proprietary software vendors. However, it seems that the free
software community forcefully rejects this technology and is
implementing various means to protect itself from these developments.
The Defective By Design movement does political activism against it,
and the GPL v3 will contain provisions that prohibit the use of free
software on restricted systems.
> Do you expect that nobody will use open source if the industry
> implements such features?
I fully expect them to do so. In fact, they are already doing it.
This is why the GPLv3 will contain provisions against it.
> Do you expect that the European governments will use software
> violating privacy laws if there is a better and secure alternative?
Privacy laws are not violated by software; they are violated by people.
The decision which software to use for any given project will
(hopefully) be guided by many factors, including questions of
protection, but also including questions of access to data.
> > > This is (except for the elementary security properties provided by
> > > the underlying virtualization layer, e.g., a microkernel) an
> > > implementation detail of the appropriate service. There may be
> > > implementations enforcing strong isolation between compartments and
> > > others that do not. That's the basic idea behind our high-level
> > > design of how to provide multilateral security: The system enforces
> > > the user-defined security policy with one exception: Applications can
> > > decide themselves whether they want to continue execution based on
> > > the (integrity) information they get (e.g., whether the GUI enforces
> > > isolation or not). But this requires that users cannot access the
> > > application's internal state.
> >
> > That's incompatible with my ideas on user freedom and on protecting the
> > user from the malicious influences of applications.
> I know. But this is IMO a basic requirement to be able to provide some
> kind of multilateral security: a negotiation of policies 'before' the
> application is executed.
It's not a requirement to provide multilateral security, it is only a
requirement for an attempt to enforce multilateral security by
technological means. Issues of multilateral security have existed since
the first time people entered into contracts with each other.
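To make concrete what such enforcement by technological means amounts
to, here is a minimal sketch in C (all names are invented for
illustration; no real TC or EMSCB interface is implied): before doing
any work, the application inspects an integrity report about the
platform and refuses to continue if its own policy is not met.

/* Hypothetical sketch of "prior policy negotiation".  The application
   checks reported platform properties (integrity information) against
   its own policy before continuing execution.  All names are invented
   for illustration. */

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Properties the platform reports about itself. */
struct integrity_report {
    bool gui_enforces_isolation;  /* GUI isolates compartments */
    bool state_sealed_from_user;  /* user cannot inspect the app's state */
};

/* The application's policy: which reported properties it requires. */
struct app_policy {
    bool require_gui_isolation;
    bool require_sealed_state;
};

/* The "negotiation": the application decides whether to run at all. */
static bool policy_satisfied(const struct integrity_report *r,
                             const struct app_policy *p)
{
    if (p->require_gui_isolation && !r->gui_enforces_isolation)
        return false;
    if (p->require_sealed_state && !r->state_sealed_from_user)
        return false;
    return true;
}

int main(void)
{
    /* In a real system the report would come from the platform, e.g.,
       via attestation; here it is simply made up. */
    struct integrity_report report = { true, false };
    struct app_policy policy = { true, true };

    if (!policy_satisfied(&report, &policy)) {
        fprintf(stderr, "platform does not satisfy policy; aborting\n");
        return EXIT_FAILURE;
    }
    puts("policy satisfied; continuing execution");
    return EXIT_SUCCESS;
}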
The problem with negotiation of policies is that balanced policies as
they exist in our society are not representable in a computer, and
that the distribution of power today will often do away with
negotiation altogether.
I think it is very important to understand what "balanced policies"
means in our society. For example, if an employer asks in a job
interview whether the job applicant is pregnant or wants a child in the
near future, the applicant is allowed to consciously lie. Similarly,
shrink-wrap licenses often contain unenforceable provisions. However,
one does not need to negotiate the provisions; one can simply "accept"
them and then violate them without violating the law. Our social
structure allows for bending of the rules in all sorts of places,
including situations which involve an imbalance of power (as in the
above examples), emergencies, customary law, and cases where simply no
one cares.
Thus, it is completely illusory to expect that a balanced policy
can be defined in terms of a computing machine, and that it is the
result of _prior_ negotiation. Life is much more complicated than
that. Thus, "Trusted computing" and the assumptions underlying its
security model are a large-scale assault on our social fabric if
deployed in a socially significant scope.
> > It is also incompatible with the free software principles.
> What exactly is in your opinion incompatible with the free software
> principles?
From the current GPLv3 draft:
"Some computers are designed to deny users access to install or run
modified versions of the software inside them. This is fundamentally
incompatible with the purpose of the GPL, which is to protect users'
freedom to change the software. Therefore, the GPL ensures that the
software it covers will not be restricted in this way."
The views of the FSF on DRM and TC are well-published, and easily
available. For example, search for "TiVo-ization".
What is incompatible with the free software principles is exactly
this: I am only free to run the software that I want to run, with my
modifications, if the hardware obeys my command. If the hardware puts
somebody else's security policy over mine, I lose my freedom to run
modified versions of the software. This is why any attempt to enforce
the security policy of the author or distributor of a free software
work is in direct conflict with the rights given by the free software
license to the recipient of the work.
Thanks,
Marcus
- Challenge: Confinement, Christian Stüble, 2006/08/15
- Re: Challenge: Confinement, Marcus Brinkmann, 2006/08/15
- Re: Challenge: Confinement, Marcus Brinkmann, 2006/08/16
- Separate trusted computing designs, Christian Stüble, 2006/08/16
- Re: Separate trusted computing designs, Marcus Brinkmann, 2006/08/17
- Re: Separate trusted computing designs, Christian Stüble, 2006/08/29
- When to Deploy, Neal H. Walfield, 2006/08/30
- Re: Separate trusted computing designs, Marcus Brinkmann <=
- Re: Separate trusted computing designs, Christian Stüble, 2006/08/30
- Re: Separate trusted computing designs, Michal Suchanek, 2006/08/30
- Re: Separate trusted computing designs, Christian Stüble, 2006/08/30
- Re: Separate trusted computing designs, Michal Suchanek, 2006/08/31
- Re: Separate trusted computing designs, Marcus Brinkmann, 2006/08/31
- Re: Separate trusted computing designs, Christian Stüble, 2006/08/31
- Re: Separate trusted computing designs, Marcus Brinkmann, 2006/08/31
- Re: Separate trusted computing designs, Marcus Brinkmann, 2006/08/31
- Re: Separate trusted computing designs, Christian Stüble, 2006/08/31
- Re: Separate trusted computing designs, Tom Bachmann, 2006/08/31