  • Re: Bitcoin P2P e-cash paper
    NOVEMBER 17, 2008 SATOSHI NAKAMOTO CRYPTOGRAPHY MAILING LIST
    I’ll try to hurry up and release the source code as soon as possible to
    serve as a reference to help clear up all these implementation questions.

    Ray Dillinger (Bear) wrote:
    > When a coin is spent, the buyer and seller digitally sign a (blinded)
    > transaction record.

    Only the buyer signs, and there’s no blinding.

    > If someone double spends, then the transaction record
    > can be unblinded revealing the identity of the cheater.

    Identities are not used, and there’s no reliance on recourse. It’s all
    prevention.

    > This is done via a fairly standard cut-and-choose
    > algorithm where the buyer responds to several challenges
    > with secret shares

    No challenges or secret shares. A basic transaction is just what you see in
    the figure in section 2. A signature (of the buyer) satisfying the public key
    of the previous transaction, and a new public key (of the seller) that must be
    satisfied to spend it the next time.
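
    A rough sketch of that structure, in Python (the names are
    illustrative and the actual signature scheme is elided):

        import hashlib
        from dataclasses import dataclass

        @dataclass
        class Transaction:
            prev_tx_hash: bytes   # the previous transaction being spent
            buyer_sig: bytes      # satisfies the previous public key
            next_pubkey: bytes    # the seller's key, which must be
                                  # satisfied to spend this the next time

            def txid(self) -> bytes:
                # Transactions are identified by hash, so the next spend
                # can point back at this one.
                return hashlib.sha256(
                    self.prev_tx_hash + self.buyer_sig + self.next_pubkey
                ).digest()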

    > They may also receive chains as long as the one they’re trying to
    > extend while they work, in which the last few “links” are links
    > that are *not* in common with the chain on which they’re working.
    > These they ignore.

    Right, if it’s equal in length, ties are broken by keeping the earliest one
    received.
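
    In code form, the rule is just this (a sketch, with chains as lists
    of blocks):

        def prefer(current_chain, new_chain):
            # Only a strictly longer chain displaces the one we have; on
            # an equal-length tie, the earliest received chain is kept.
            if len(new_chain) > len(current_chain):
                return new_chain
            return current_chain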

    > If it contains a double spend, then they create a “transaction”
    > which is a proof of double spending, add it to their pool A,
    > broadcast it, and continue work.

    There’s no need for reporting of “proof of double spending” like that. If the
    same chain contains both spends, then the block is invalid and rejected.

    Same if a block didn’t have enough proof-of-work. That block is invalid and
    rejected. There’s no need to circulate a report about it. Every node could
    see that and reject it before relaying it.
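
    The acceptance check amounts to something like this (a sketch;
    blocks here are plain dicts and the real checks are more involved):

        def accept_block(block, chain, target):
            # Not enough proof-of-work: invalid, rejected before relay.
            if int.from_bytes(block["hash"], "big") >= target:
                return False
            # A chain containing both spends of an output is invalid.
            spent = {prev for b in chain for prev in b["spends"]}
            for prev in block["spends"]:
                if prev in spent:
                    return False
                spent.add(prev)
            return True   # valid: store it and relay it to peers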

    If there are two competing chains, each containing a different version of the
    same transaction, with one trying to give money to one person and the other
    trying to give the same money to someone else, resolving which of the spends is
    valid is what the whole proof-of-work chain is about.

    We’re not “on the lookout” for double spends to sound the alarm and catch the
    cheater. We merely adjudicate which one of the spends is valid. Receivers of
    transactions must wait a few blocks to make sure that resolution has had time
    to complete. Would-be cheaters can try to double-spend simultaneously all
    they want, and all they accomplish is that within a few blocks, one of the
    spends becomes valid and the others become invalid. Any later double-spends
    are immediately rejected once there’s already a spend in the main chain.

    Even if an earlier spend wasn’t in the chain yet, if it was already in all the
    nodes’ pools, then the second spend would be turned away by all those nodes
    that already have the first spend.
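
    That pool check is first-spend-wins (sketch):

        def add_to_pool(tx, pool):
            # pool maps a spent output's hash to the first transaction
            # seen spending it; a conflicting second spend is turned away.
            if tx.prev_tx_hash in pool:
                return False
            pool[tx.prev_tx_hash] = tx
            return True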

    > If the new chain is accepted, then they give up on adding their
    > current link, dump all the transactions from pool L back into pool
    > A (along with transactions they’ve received or created since
    > starting work), eliminate from pool A those transaction records
    > which are already part of a link in the new chain, and start work
    > again trying to extend the new chain.

    Right. They also refresh whenever a new transaction comes in, so L pretty much
    contains everything in A all the time.
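
    In other words (a sketch, using Bear's pool names):

        def on_new_transaction(tx, pool_a, pool_l):
            # New arrivals go into both pools at once, so the working set
            # L stays a mirror of A rather than a stale snapshot.
            pool_a.add(tx)
            pool_l.add(tx)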

    > CPU-intensive digital signature algorithm to
    > sign the chain including the new block L.

    It’s a Hashcash style SHA-256 proof-of-work (partial pre-image of zero), not a
    signature.

    > Is there a mechanism to make sure that the “chain” does not consist
    > solely of links added by just the 3 or 4 fastest nodes? ‘Cause a
    > broadcast transaction record could easily miss those 3 or 4 nodes
    > and if it does, and those nodes continue to dominate the chain, the
    > transaction might never get added.

    If you’re thinking of it as a CPU-intensive digital signing, then you may be
    thinking of a race to finish a long operation first and the fastest always
    winning.

    The proof-of-work is a Hashcash style SHA-256 partial pre-image search. It’s a
    memoryless process where you do millions of hashes a second, with a small
    chance of finding one each time. The 3 or 4 fastest nodes’ dominance would
    only be proportional to their share of the total CPU power. Anyone’s chance of
    finding a solution at any time is proportional to their CPU power.
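
    A sketch of the search loop, with difficulty expressed as an integer
    target (the constants are illustrative):

        import hashlib

        def proof_of_work(header: bytes, target: int):
            # Memoryless search: every nonce is an independent trial with
            # the same tiny success chance, so anyone's odds of winning
            # at any moment are proportional to their hash rate.
            nonce = 0
            while True:
                h = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
                if int.from_bytes(h, "big") < target:
                    return nonce, h
                nonce += 1

        # target = 2 ** (256 - 20) demands ~20 leading zero bits,
        # i.e. about a million hashes on average.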

    There will be transaction fees, so nodes will have an incentive to receive and
    include all the transactions they can. Nodes will eventually be compensated by
    transaction fees alone when the total coins created hits the pre-determined
    ceiling.

    > Also, the work requirement for adding a link to the chain should
    > vary (again exponentially) with the number of links added to that
    > chain in the previous week, causing the rate of coin generation
    > (and therefore inflation) to be strictly controlled.

    Right.
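
    The adjustment can be as simple as scaling the target by how fast
    blocks actually came in (a sketch; the window length and any limits
    on the step size are implementation choices):

        def retarget(old_target, actual_seconds, desired_seconds):
            # Blocks arriving faster than desired shrink the target,
            # which raises the work required, and vice versa.
            return old_target * actual_seconds // desired_seconds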

    > You need coin aggregation for this to scale. There needs to be
    > a “provable” transaction where someone retires ten single coins
    > and creates a new coin with denomination ten, etc.

    Every transaction is one of these. See section 9, Combining and Splitting Value.
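
    That is, a transaction takes multiple inputs and creates multiple
    outputs (sketch):

        from dataclasses import dataclass

        @dataclass
        class TxInput:
            prev_tx_hash: bytes   # an earlier output being retired
            signature: bytes      # satisfies that output's public key

        @dataclass
        class TxOutput:
            value: int            # the new denomination
            pubkey: bytes         # who can spend it next

        @dataclass
        class Tx:
            inputs: list          # e.g. ten one-coin outputs going in...
            outputs: list         # ...and one ten-coin output coming out

        def conserves_value(tx, value_of):
            # value_of looks up the value of each referenced output; any
            # surplus of inputs over outputs can go to fees.
            ins = sum(value_of(i.prev_tx_hash) for i in tx.inputs)
            outs = sum(o.value for o in tx.outputs)
            return ins >= outs
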
    Satoshi Nakamoto
  • Re: Bitcoin P2P e-cash paper
    NOVEMBER 17, 2008 SATOSHI NAKAMOTO CRYPTOGRAPHY MAILING LIST
    James A. Donald wrote:
    > > Fortunately, it’s only necessary to keep a
    > > pending-transaction pool for the current best branch.
    >
    > This requires that we know, that is to say an honest
    > well behaved peer whose communications and data storage
    > is working well knows, what the current best branch is –

    I mean a node only needs the pending-tx pool for the best branch it
    has. The branch that it currently thinks is the best branch.
    That’s the branch it’ll be trying to make a block out of, which is
    all it needs the pool for.
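
    A sketch of that arrangement (names illustrative):

        class Node:
            def __init__(self):
                self.best_branch = []   # whatever branch currently
                                        # looks best to this node
                self.pending = []       # pending-tx pool, kept only for
                                        # that branch, to build the next
                                        # block on top of it

            def switch_branch(self, new_branch, new_pending):
                # On switching branches, the pool is rebuilt for the new
                # tip; no pools are kept for branches we aren't extending.
                self.best_branch = new_branch
                self.pending = new_pending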

    > > Broadcasts will probably be almost completely
    > > reliable.
    >
    > Rather than assuming that each message arrives at least
    > once, we have to make a mechanism such that the
    > information arrives even though conveyed by messages
    > that frequently fail to arrive.

    I think I’ve got the peer networking broadcast mechanism covered.

    Each node sends its neighbours an inventory list of hashes of the
    new blocks and transactions it has. The neighbours request the
    items they don’t have yet. If the item never comes through after a
    timeout, they request it from another neighbour that had it. Since
    all or most of the neighbours should eventually have each item,
    even if the comms get fumbled up with one, they can get it from any
    of the others, trying one at a time.
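
    In outline (a sketch with the network layer stubbed out; the names
    are illustrative, not from the actual source):

        def send(peer, message):
            peer.inbox.append(message)   # stand-in for the network layer

        def announce(node, new_hashes):
            # Tell each neighbour what new blocks/transactions we have.
            for peer in node.peers:
                send(peer, ("inv", new_hashes))

        def on_inv(node, peer, hashes):
            # Request only items we don't have yet, and remember who
            # advertised each one in case the request must be retried.
            for h in hashes:
                node.who_has.setdefault(h, []).append(peer)
                if h not in node.have and h not in node.requested:
                    send(peer, ("getdata", h))
                    node.requested[h] = peer

        def on_timeout(node, h):
            # The item never came through: try another neighbour that
            # had it, one at a time, until someone delivers.
            for peer in node.who_has.get(h, []):
                if peer is not node.requested.get(h):
                    send(peer, ("getdata", h))
                    node.requested[h] = peer
                    return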

    The inventory-request-data scheme introduces a little latency, but
    it ultimately improves overall speed by keeping redundant data blocks
    off the transmit queues and conserving bandwidth.

    > You have an outline
    > and proposal for such a design, which is a big step
    > forward, but the devil is in the little details.

    I believe I’ve worked through all those little details over the
    last year and a half while coding it, and there were a lot of them.
    The functional details are not covered in the paper, but the
    source code is coming soon. I sent you the main files.
    (available by request at the moment, full release soon)

    Satoshi Nakamoto
