STS (Secure Time Seeding) uses server time from SSL handshakes, which is fine when talking to other Microsoft servers, but other implementations put random data in that field to prevent fingerprinting.

  • Z4rK@lemmy.world · 1 year ago

    This bug has created havoc for me. We had a “last synchronized” time stamp persisted to a DB so that the system was able to robustly deal with server restarts / bootstrapping on new environments.

    The synchronization was used to continuously fetch critical incidents and visualize them on a map. The data came through a third-party API that broke down if we asked for too much data at a time, so we had to reason about when we last fetched data, and only ask for new updates since then.

    Each time the synchronization ran, it would persist an updated time stamp to the DB.

    Of course this routine ran just as the server jumped several months into the future for a few minutes. After this, the last-run time stamp was now some time next year. Subsequent runs of the synchronization routine never found any updates, as the date range it asked for didn’t really make sense.

    It just ran successfully without finding any new issues. We were quite happy about it. It took months before we figured out we actually had a major discrepancy in our visualization map.

    We had plenty of unit tests, integration tests, and system tests. We just didn’t think of having one that checked whether the server had time traveled to the future or not.

    • lolcatnip@reddthat.com · 1 year ago

      If I’ve learned one thing from the last decade of movie and TV sci-fi, it’s that you always need to account for the possibility of time travel.

      • Z4rK@lemmy.world · 1 year ago

        While the root issue was still unknown, we actually wrote one. It sort of made sense: check that the date-from isn’t later than the date-to in the generated range used for the synchronization request. Obviously. You never know what some idiot future coder (usually yourself, some weeks from now) would do, am I right?

        However, it was far worse to write the code that fulfilled the test. In the very same few lines of code, we fetched the current date from time.now() plus some time span as date.to, fetched the last synchronization timestamp from the DB as date.from, and then validated that date.from wasn’t greater than date.to, logging an error if it was.

        The validation code made no logical sense when looking at it.
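        The paradox described above can be sketched roughly like this (a minimal Python sketch with hypothetical names; the original system’s language and API aren’t stated in the thread):

        ```python
        from datetime import datetime, timedelta, timezone

        def build_sync_range(last_sync: datetime) -> tuple[datetime, datetime]:
            """Build the date range for the next synchronization request."""
            date_from = last_sync  # last successful sync, read from the DB
            date_to = datetime.now(timezone.utc) + timedelta(minutes=5)  # "now" plus a small buffer

            # The check that "makes no sense" on paper: date_from came from a past
            # run and date_to from the current clock, so this can seemingly never
            # fail -- unless the clock jumped into the future during an earlier run
            # and the bogus timestamp was persisted.
            if date_from > date_to:
                raise ValueError(
                    f"last sync {date_from.isoformat()} is later than "
                    f"{date_to.isoformat()}; did the server time-travel?"
                )
            return date_from, date_to
        ```

        Read in isolation, the guard looks redundant; it only pays off when a persisted timestamp outlives a clock excursion.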

        • towerful@programming.dev · 1 year ago

          I feel like the 3rd-party API should have had some error checking, although that might have strayed too far into a client’s business logic.
          If it is an API of incidents, that suggests past incidents. And the whole “never trust user data” thing kinda implies they should throw an error if you request information about a time range in the future.
          I guess not throwing an error does allow the 3rd party to “schedule” an incident in the future, e.g. planned maintenance/downtime.

          But then, that isn’t separation of concerns. Ideally those endpoints would be separate: one for planned hypothetical incidents and one for historical concrete incidents.

          It’s definitely an odd scenario where you are taking your trusted data (from your systems and your database), then having to validate it.