simon-git: putty (master): Simon Tatham

Commits to Tartarus hosted VCS tartarus-commits at lists.tartarus.org
Sun Oct 7 09:14:22 BST 2018


TL;DR:
  36caf03a Utility routines for iterating over a packet queue.
  2e7ced64 Give BPPs a Frontend, so they can do their own logging.
  4c8c41b7 Support OpenSSH delayed compression without a rekey.

Repository:     https://git.tartarus.org/simon/putty.git
On the web:     https://git.tartarus.org/?p=simon/putty.git
Branch updated: master
Committer:      Simon Tatham <anakin at pobox.com>
Date:           2018-10-07 09:14:22

commit 36caf03a5b718f5b2ec6115349bea6cbe322da48
web diff https://git.tartarus.org/?p=simon/putty.git;a=commitdiff;h=36caf03a5b718f5b2ec6115349bea6cbe322da48;hp=0bbe87f11e1fdc6d62a49405018fb2b1e76b88f6
Author: Simon Tatham <anakin at pobox.com>
Date:   Sat Oct 6 18:42:08 2018 +0100

    Utility routines for iterating over a packet queue.
    
    I haven't needed these until now, but I'm about to need to inspect the
    entire contents of a packet queue before deciding whether to process
    the first item on it.
    
    I've changed the single 'vtable method' in packet queues from get(),
    which returned the head of the queue and optionally popped it, to
    after(), which does the same but returns the item after a specified
    tree node. So if you pass the special end node to after(), then it
    behaves like get(), but now you can also use it to retrieve the
    successor of a packet.
    
    (Orthogonality says that you can also _pop_ the successor of a packet
    by calling after() with prev != pq.end and pop == TRUE. I don't have a
    use for that one yet.)
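
    Here's a minimal self-contained sketch of the after() idea -
    illustrative names only, not the real declarations in ssh.h:

        #include <stdio.h>
        #include <stdbool.h>

        typedef struct Node Node;
        struct Node { Node *prev, *next; };

        typedef struct Queue {
            Node end;                  /* sentinel marking both ends */
        } Queue;

        typedef struct Item {
            Node node;                 /* must be the first member */
            int type;                  /* stand-in for a packet type code */
        } Item;

        static void queue_init(Queue *q) { q->end.prev = q->end.next = &q->end; }

        static void queue_push(Queue *q, Item *it)
        {
            it->node.next = &q->end;
            it->node.prev = q->end.prev;
            it->node.prev->next = &it->node;
            q->end.prev = &it->node;
        }

        /* Return the item after 'prev' (so passing &q->end gives the head),
         * optionally unlinking it from the queue. */
        static Item *queue_after(Queue *q, Node *prev, bool pop)
        {
            Node *n = prev->next;
            if (n == &q->end)
                return NULL;           /* nothing after 'prev' */
            if (pop) {
                n->prev->next = n->next;
                n->next->prev = n->prev;
            }
            return (Item *)n;          /* node is Item's first member */
        }

        int main(void)
        {
            Queue q;
            Item a = {{NULL, NULL}, 1}, b = {{NULL, NULL}, 2}, c = {{NULL, NULL}, 3};
            queue_init(&q);
            queue_push(&q, &a); queue_push(&q, &b); queue_push(&q, &c);

            /* Peek at the head: like the old get() with pop == FALSE. */
            printf("head type = %d\n", queue_after(&q, &q.end, false)->type);

            /* Walk the whole queue without popping anything. */
            for (Item *it = queue_after(&q, &q.end, false); it;
                 it = queue_after(&q, &it->node, false))
                printf("saw type %d\n", it->type);
            return 0;
        }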

 ssh.h       | 27 +++++++++++++++------------
 sshcommon.c | 14 ++++++++------
 2 files changed, 23 insertions(+), 18 deletions(-)

commit 2e7ced64801514e0f8c72849e281032418d96ef7
web diff https://git.tartarus.org/?p=simon/putty.git;a=commitdiff;h=2e7ced64801514e0f8c72849e281032418d96ef7;hp=36caf03a5b718f5b2ec6115349bea6cbe322da48
Author: Simon Tatham <anakin at pobox.com>
Date:   Sun Oct 7 08:16:44 2018 +0100

    Give BPPs a Frontend, so they can do their own logging.
    
    The sshverstring quasi-frontend is passed a Frontend pointer at setup
    time, so that it can generate Event Log entries containing the local
    and remote version strings and the results of remote bug detection.
    
    I'm promoting that field of sshverstring to a field of the public BPP
    structure, so now all BPPs have the right to talk directly to the
    frontend if they want to. This means I can move all the log messages
    of the form 'Initialised so-and-so cipher/MAC/compression' down into
    the BPPs themselves, where they can live exactly alongside the actual
    initialisation of those primitives.
    
    It also means BPPs will be able to log interesting things they detect
    at any point in the packet stream, which is about to come in useful
    for another purpose.
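
    A tiny sketch of the shape of that change - Frontend, logevent and
    ExampleBPP here are stand-ins, not the actual PuTTY declarations:

        #include <stdio.h>

        typedef struct Frontend { const char *session_name; } Frontend;

        static void logevent(Frontend *frontend, const char *msg)
        {
            /* stand-in for the real Event Log call */
            printf("[%s] %s\n", frontend->session_name, msg);
        }

        typedef struct ExampleBPP {
            Frontend *frontend;   /* the field promoted into the common BPP struct */
            /* ...per-protocol state... */
        } ExampleBPP;

        static void example_bpp_set_cipher(ExampleBPP *bpp, const char *cipher_name)
        {
            /* initialise the cipher here, then log right alongside it */
            char buf[128];
            snprintf(buf, sizeof buf,
                     "Initialised %s client->server encryption", cipher_name);
            logevent(bpp->frontend, buf);
        }

        int main(void)
        {
            Frontend fe = { "example session" };
            ExampleBPP bpp = { &fe };
            example_bpp_set_cipher(&bpp, "AES-256 SDCTR");
            return 0;
        }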

 ssh.c           |  6 +++---
 ssh.h           |  2 ++
 ssh1bpp.c       | 10 +++++++++-
 ssh1login.c     |  2 --
 ssh2bpp-bare.c  |  3 ++-
 ssh2bpp.c       | 34 +++++++++++++++++++++++++++++++++-
 ssh2transport.c | 27 ---------------------------
 sshbpp.h        |  8 +++++---
 sshverstring.c  | 51 +++++++++++++++++++++++++--------------------------
 9 files changed, 79 insertions(+), 64 deletions(-)

commit 4c8c41b7a0e62c45e9a9b10c9a9fb849d65603a1
web diff https://git.tartarus.org/?p=simon/putty.git;a=commitdiff;h=4c8c41b7a0e62c45e9a9b10c9a9fb849d65603a1;hp=2e7ced64801514e0f8c72849e281032418d96ef7
Author: Simon Tatham <anakin at pobox.com>
Date:   Sun Oct 7 09:10:14 2018 +0100

    Support OpenSSH delayed compression without a rekey.
    
    The problem with OpenSSH delayed compression is that the spec has a
    race condition. Compression is enabled when the server sends
    USERAUTH_SUCCESS. In the server->client direction, that's fine: the
    USERAUTH_SUCCESS packet is not itself compressed, and the next packet
    in the same direction is. But in the client->server direction, this
    specification relies on there being a moment of half-duplex in the
    connection: the client can't send any outgoing packet _after_ whatever
    userauth packet the USERAUTH_SUCCESS was a response to, and _before_
    finding out whether the response is USERAUTH_SUCCESS or something
    else. If it emitted, say, an SSH_MSG_IGNORE or initiated a rekey
    (perhaps due to a timeout), then that might cross in the network with
    USERAUTH_SUCCESS and the server wouldn't be able to know whether to
    treat it as compressed.
    
    My previous solution was to note the presence of delayed compression
    options in the server KEXINIT, but not to negotiate them in the
    initial key exchange. Instead, we conduct the userauth exchange with
    compression="none", and then once userauth has concluded, we trigger
    an immediate rekey in which we do accept delayed compression methods -
    because of course by that time they're no different from the non-
    delayed versions. And that means compression is enabled by the
    bidirectional NEWKEYS exchange, which lacks that race condition.
    
    I think OpenSSH itself gets away with this because its layer structure
    is structured so as to never send any such asynchronous transport-layer
    message in the middle of userauth. Ours is not. But my cunning plan is
    that now that my BPP abstraction includes a queue of packets to be
    sent and a callback that processes that queue on to the output raw
    data bufchain, it's possible to make that callback terminate early, to
    leave any dangerous transport-layer messages unsent while we wait for
    a userauth response.
    
    Specifically: if we've negotiated a delayed compression method and not
    yet seen USERAUTH_SUCCESS, then ssh2_bpp_handle_output will emit all
    packets from its queue up to and including the last one in the
    userauth type-code range, and keep back any further ones. The idea is
    that _if_ that last userauth message was one that might provoke
    USERAUTH_SUCCESS, we don't want to send any difficult things after it;
    if it's not (e.g. it's in the middle of some ongoing userauth process
    like keyboard-interactive or GSSAPI) then the userauth layer will know
    that, and will emit
    some further userauth packet on its own initiative which will clue us
    in that it's OK to release everything up to and including that one.
    
    (So in particular it wasn't even necessary to forbid _all_ transport-
    layer packets during userauth. I could have done that by reordering
    the output queue - packets in that queue haven't been assigned their
    sequence numbers yet, so that would have been safe - but it's more
    elegant not to have to.)
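
    As a concrete illustration of that hold-back rule (the names and data
    structure below are invented for the example, not the code in
    ssh2bpp.c; 50-79 is the userauth message-number range from RFC 4250):

        #include <stdio.h>
        #include <stdbool.h>

        #define SSH2_MSG_USERAUTH_MIN 50
        #define SSH2_MSG_USERAUTH_MAX 79

        typedef struct PendingPacket {
            int type;
            struct PendingPacket *next;
        } PendingPacket;

        /* How many packets at the front of the output queue are safe to send
         * while a delayed compression method is negotiated but
         * USERAUTH_SUCCESS hasn't been seen yet: everything up to and
         * including the last userauth-range packet, and nothing after it. */
        static int releasable_count(const PendingPacket *queue,
                                    bool awaiting_userauth_outcome)
        {
            int n = 0, releasable = 0;
            for (const PendingPacket *p = queue; p; p = p->next) {
                n++;
                if (!awaiting_userauth_outcome ||
                    (p->type >= SSH2_MSG_USERAUTH_MIN &&
                     p->type <= SSH2_MSG_USERAUTH_MAX))
                    releasable = n;
            }
            return releasable;
        }

        int main(void)
        {
            /* Queue: USERAUTH_REQUEST (50), IGNORE (2), KEXINIT (20). */
            PendingPacket p3 = { 20, NULL }, p2 = { 2, &p3 }, p1 = { 50, &p2 };
            printf("send first %d packet(s)\n", releasable_count(&p1, true));
            return 0;
        }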
    
    One particular case we do have to be careful about is not trying to
    initiate a _rekey_ during userauth, if delayed compression is in the
    offing. That's because when we start rekeying, ssh2transport stops
    sending any higher-layer packets at all, to discourage servers from
    trying to ignore the KEXINIT and press on regardless - you don't get
    your higher-layer replies until you actually respond to the
    lower-layer interrupt. But in this case, if ssh2transport sent a
    KEXINIT, which ssh2bpp kept back in the queue to avoid a delayed
    compression race and would only send if another userauth packet
    followed it, which ssh2transport would never pass on to ssh2bpp's
    output queue, there'd be a complete protocol deadlock. So instead I
    defer any attempt to start a rekey until after userauth finishes
    (using the existing system for starting a deferred rekey at that
    moment, which was previously used for the _old_ delayed-compression
    strategy, and still has to be here anyway for GSSAPI purposes).
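
    The deferred-rekey part, reduced to a sketch (again with made-up
    names, not the real ssh2transport.c state):

        #include <stdio.h>
        #include <stdbool.h>

        typedef struct TransportState {
            bool userauth_in_progress;
            bool delayed_compression_pending;  /* negotiated, not yet enabled */
            const char *deferred_rekey_reason; /* NULL if no rekey queued up */
        } TransportState;

        static void start_rekey(TransportState *s, const char *reason)
        {
            printf("starting rekey: %s\n", reason);
            s->deferred_rekey_reason = NULL;
        }

        /* Called whenever something (e.g. a timeout) wants a rekey. */
        static void request_rekey(TransportState *s, const char *reason)
        {
            if (s->userauth_in_progress && s->delayed_compression_pending) {
                /* Starting a rekey now could deadlock behind a held-back
                 * KEXINIT, so just remember that we want one. */
                s->deferred_rekey_reason = reason;
                return;
            }
            start_rekey(s, reason);
        }

        /* Called once userauth has concluded. */
        static void userauth_finished(TransportState *s)
        {
            s->userauth_in_progress = false;
            if (s->deferred_rekey_reason)
                start_rekey(s, s->deferred_rekey_reason);
        }

        int main(void)
        {
            TransportState s = { true, true, NULL };
            request_rekey(&s, "timeout");   /* deferred until userauth ends */
            userauth_finished(&s);          /* the rekey fires here */
            return 0;
        }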

 ssh2bpp.c       | 176 ++++++++++++++++++++++++++++++++++++++++++++++++++------
 ssh2transport.c |  93 ++++++++++--------------------
 sshbpp.h        |  15 ++++-
 3 files changed, 203 insertions(+), 81 deletions(-)


