• Vincent@feddit.nl

    This is fairly standard survey design, I believe. They’re not looking to know which features are wanted in general; they want to know their relative popularity. The sets you’re presented with are randomised (i.e. we don’t all get to see the same sets), which allows them to get a ranked list of lots of potential features while only having to run ten survey questions per participant.

    If you get a set with three features that everyone likes or dislikes at about the same level, then it doesn’t really matter what you answer: they’ll all end up at the top or bottom of the list, respectively. Each of those options also gets presented as part of different sets to different users, where different answers can win out.
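    To get a rough feel for how that aggregation works, here’s a minimal sketch; the feature names, “appeal” values and picking rule are all invented for illustration, not Mozilla’s actual method:

    ```python
    import random
    from collections import Counter

    # Illustrative only: a made-up pool of candidate features with hidden "appeal" values.
    FEATURES = [f"feature_{i:02d}" for i in range(30)]
    true_appeal = {f: random.random() for f in FEATURES}

    def one_respondent(n_questions=10, set_size=3):
        # Each participant sees ten randomised sets of three features and
        # picks their favourite from each set.
        picks = []
        for _ in range(n_questions):
            shown = random.sample(FEATURES, set_size)
            picks.append(max(shown, key=lambda f: true_appeal[f]))
        return picks

    # Aggregated over many respondents, every feature appears in many different
    # sets, so a ranking over all 30 features emerges even though no single
    # participant ranked more than a handful of them.
    wins = Counter()
    for _ in range(1000):
        wins.update(one_respondent())

    print([feature for feature, _ in wins.most_common(5)])
    ```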

    • prongs@lemm.ee

      You’re bang on. It’s called MaxDiff. I use it frequently in my line of work to prioritise product or service messaging with panel data. In some cases it’s better to use inferred preference rather than stated preference, but it’s generally good to keep the options comparable in “size” of offer.

      I would never interpret a low-end result from a MaxDiff model as “wow, 5% of people want slower browsers.” Instead I focus on the top cluster. As with any model, it’s only ever so accurate. Don’t read into the questions too much.
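      For anyone curious, here’s a minimal sketch of the simplest count-based MaxDiff scoring (best minus worst); the data is invented, and real analyses often fit a proper choice model instead:

      ```python
      from collections import defaultdict

      # Invented responses: each records the set shown plus which option the
      # respondent marked most and least appealing.
      responses = [
          {"shown": ["vertical_tabs", "built_in_vpn", "slower_browser"],
           "best": "vertical_tabs", "worst": "slower_browser"},
          {"shown": ["built_in_vpn", "tab_groups", "vertical_tabs"],
           "best": "tab_groups", "worst": "built_in_vpn"},
      ]

      counts = defaultdict(lambda: {"best": 0, "worst": 0, "shown": 0})
      for r in responses:
          for option in r["shown"]:
              counts[option]["shown"] += 1
          counts[r["best"]]["best"] += 1
          counts[r["worst"]]["worst"] += 1

      # Best-minus-worst score, normalised by how often each option was shown.
      scores = {o: (c["best"] - c["worst"]) / c["shown"] for o, c in counts.items()}
      for option, score in sorted(scores.items(), key=lambda kv: -kv[1]):
          print(f"{option:15s} {score:+.2f}")
      ```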

    • thingsiplay@beehaw.org

      The problem with this design is that if people don’t care and don’t have the option to say so, they’ll give random answers. It would also be important information for Mozilla if many people don’t care about a specific question. So I feel like they should have included that option. But, who am I…

      • Vincent@feddit.nl

        Presumably if people don’t care, they don’t fill in the survey. But as an extra failsafe, they’ve also included the feature “twice as slow as your current browser”. If you rank that high, then your result can probably be discarded.
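        For illustration only (the option names and data layout here are made up), a minimal sketch of how such a decoy could be used to filter out careless responses:

        ```python
        # Made-up data: each respondent is a list of answers; the decoy is the
        # deliberately unattractive option.
        DECOY = "twice_as_slow"

        all_respondents = [
            [{"shown": ["vertical_tabs", DECOY, "tab_groups"], "best": "tab_groups", "worst": DECOY}],
            [{"shown": ["built_in_vpn", DECOY, "tab_groups"], "best": DECOY, "worst": "tab_groups"}],
        ]

        def passes_check(answers):
            # Discard anyone who ever picked the decoy as the best option in a set.
            return all(a["best"] != DECOY for a in answers)

        clean = [r for r in all_respondents if passes_check(r)]
        print(f"kept {len(clean)} of {len(all_respondents)} respondents")
        ```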

        But yeah, this design has worked well for many other surveys, so presumably it’ll work well for this one. They’re the experts :)

        • thingsiplay@beehaw.org

          > Presumably if people don’t care, they don’t fill in the survey.

          That’s not what I said. People do care about the survey, and they’re doing Mozilla a favor by filling it in. But if a question doesn’t offer the answer they want to give, then it becomes a problem. It’s a different scenario from the one you were describing.

          > But yeah, this design has worked well for many other surveys, so presumably it’ll work well for this one. They’re the experts :)

          With that attitude, and without acknowledging a problem, it won’t get better. If they were the experts, then they wouldn’t need a survey. But it’s easy to dismiss any criticism with that argument.

          • Vincent@feddit.nl

            They’re the experts in running surveys, not in knowing what the users want - the users are the experts in that. Hence the survey.

            That remark was basically a reformulation of, and agreement with, your “But, who am I…”

    • sugar_in_your_tea@sh.itjust.works

      Why not just get one big list with like 4 answers:

      • really want
      • want
      • meh
      • don’t want

      How is that worse than getting like 10 screens of relative answers?

      • Vincent@feddit.nl

        Because you’ll end up with ten features that all have overwhelmingly “really want” and “want” answers, and then you still don’t know which of those ten to work on first.

          • Vincent@feddit.nl

            Sorry, I wasn’t talking about your answers specifically, but about aggregate results. (Also note that I think you might not get presented with all possible features when taking a single survey.)

            The point is not to find the features that people would like, but the features that people would like most.

            Additionally, this allows you to find a few features that have particularly high value for a subset of users, even though on average they’re not that interesting. (I think Multi-Account Containers are a good example of that: too much of a hassle for many, but for some people, like me, a reason to never switch away from Firefox.)
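            As a made-up sketch of that distinction: a feature can have a modest average score yet a devoted minority, which a simple average would hide:

            ```python
            import statistics

            # Invented per-respondent scores (e.g. times picked as "best") for two features.
            per_user_picks = {
                "tab_groups": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],          # broadly liked
                "account_containers": [0, 0, 4, 0, 0, 0, 5, 0, 0, 0],  # loved by a few
            }

            for feature, picks in per_user_picks.items():
                mean = statistics.mean(picks)
                devoted = sum(p >= 3 for p in picks) / len(picks)  # share of strong fans
                print(f"{feature:20s} mean={mean:.1f} devoted={devoted:.0%}")
            ```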

            • sugar_in_your_tea@sh.itjust.works

              Then perhaps allow them to pick their top 5 or so and rank those, and then maybe pick up to 5 that they don’t care about. I’m pretty meh toward a lot of those features, and I imagine others are as well.

    • Possibly linux

      It doesn’t seem randomized, based on what I have seen.

      • Vincent@feddit.nl

        You mean you’ve taken it multiple times and kept seeing the exact same ten sets?

    • [email protected]@phuu.uk

      @Vincent I couldn’t finish the survey, purely because the questions suggest that I should “want” something.

      Perhaps if they asked the question differently, they’d have gotten a completed survey from me.

      I can’t answer loaded questions.

      The samples they get are meaningless if only people who complete the survey are counted.

      The fact that I couldn’t select none of them and move forward meant something: jerk Mozilla off, or don’t.

      I chose not to, and I am a Mozilla user!

      #librewolf

      • blind3rdeye@lemm.ee

        I’m halfway through the survey right now, and rather than continuing, I’m just stalling because I don’t want to rank another set of three options that I don’t care about. Some of the choices already given were like “well, I guess I’ll pick the feature that I’ve at least thought about using once…”, but now it’s just a list of 3 things that I don’t want whatsoever. I’m trying to give useful feedback, but I feel like I’m really just giving noise.

        • [email protected]@phuu.uk

          @blind3rdeye it’s a load of crap, isn’t it?

          The statisticians may disagree, but they fail to understand that forcing “want” into the situation is not a true reflection of what people care about.

          If they had just tweaked that one word, it wouldn’t be as much of a steaming pile of turds as it is.

          It’s almost like they want people to not finish the survey, so they can have a warped sample.