guix-devel

Re: How can we decrease the cognitive overhead for contributors?


From: Katherine Cox-Buday
Subject: Re: How can we decrease the cognitive overhead for contributors?
Date: Tue, 5 Sep 2023 12:00:47 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.13.0

On 9/5/23 8:01 AM, Simon Tournier wrote:
> Hi Katherine,
>
> Thank you for your extensive analysis.  I concur.
>
> On Wed, 30 Aug 2023 at 10:11, Katherine Cox-Buday <cox.katherine.e@gmail.com>
> wrote:

>> 3. We should reify a way for Guix, the project, to measure and track
>>    progress against long-term goals, particularly when they're social
>>    and not strictly technical.

> That is the most difficult part, IMHO.  Well, what are the long-term
> goals? :-)
>
> I am almost sure we will get various answers depending on people.  Let's
> say the long-term goals of the Guix project are: Liberating, Dependable,
> and Hackable.  Then how do you give concrete quantities that we can
> measure or track?

I think starting at the top and trying to derive concrete quantities from values is a healthy exercise. I agree that it's difficult, but without doing it, it's easy to be left in an echo chamber where your project isn't actually accomplishing any of the things you'd like it to.

However, my point (3) above is a little different, and easier. Here we have a lower-level goal that is closer to a concrete quantity: "The overhead of contributing to Guix should be low." There are various ways to reduce this further, but they are tightly coupled to the method of measurement. In other words, what you measure is what you will manage, so choose what you measure carefully.

> And it is always difficult, if not impossible, to measure or track some
> goals that are not technical but social.  For example, how do you
> measure being welcoming or being a safe place for all?

I think the easiest way to start, and something that's actually pretty effective, is to start doing annual surveys, e.g.:

- https://discourse.nixos.org/t/2022-nix-survey-results/18983
- https://survey.stackoverflow.co/2023/
- https://tip.golang.org/blog/survey2023-q1-results

This thread turned out to be an informal survey, and I think it's easy to see that some people are happy with how things are, and some people would like to see change.

With a survey you can quantify these opinions and say things like "X% of people would like the current contribution process to remain the same. Y% of those are committers."

This can help reveal larger patterns, and over time, trends.
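As a sketch of what quantifying that kind of answer could look like, here is a small awk tally. The data, the "role" and "answer" columns, and the resulting percentages are all invented for illustration:

```shell
# Tally made-up survey answers: what share of respondents want to keep
# the current process, and how many of those are committers?
# (All of the data below is invented for the sake of the example.)
out=$(awk -F, '
    { total++
      if ($2 == "keep") {
          keep++
          if ($1 == "committer") keep_committers++
      } }
    END { printf "keep: %d%% of %d respondents; committers among them: %d\n",
                 100 * keep / total, total, keep_committers }
' <<'EOF'
committer,keep
committer,change
contributor,change
contributor,change
contributor,keep
EOF
)
echo "$out"
```

A real survey would of course segment along more dimensions (tenure, frequency of contribution, and so on), but even a crude cross-tabulation like this turns a thread full of anecdotes into a trend you can track year over year.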

> Do not take me wrong, I strongly think we must think again and again on
> that point for improving.  It's just easier to tackle a technical bug. :-)

It's definitely not as easy as tackling a technical bug. I think as engineers we often have a difficult time admitting to ourselves that efficacy is much more than the perfect algorithm or the mechanical shuffling of things around. Do we want Guix to be a fun technical exercise? Or do we want it to fulfill its stated goals? Why not both! :)

> Here I see two annoyances:
>
>   1. The number of subcommands and steps.
>   2. Each subcommand has a list of options to digest.
>
> Well, CI is helpful here, for sure.  However, it would be helpful to
> have a script similar to etc/teams.scm or etc/committer.scm that would
> help to run all these steps.

Yes, and commonly whatever you would use for CI is the same thing you would run locally. This is intentional so that you can have some confidence that CI will pass.

> It does not mean that all these steps need to be run before each
> submission.  However, having a tool would help irregular contributors or
> newcomers; it would decrease the cognitive overhead, i.e., that overhead
> would be pushed to some script, and it would reinforce confidence.

And then, although we've reduced the cognitive overhead globally, we've increased it locally by introducing another conditional: "Do I need to run the CI script? How do I know?" The script could probably decide this too.

> Now someone™ needs to implement this script. ;-)
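To make the idea concrete, a minimal sketch of such a helper might look like the following. The particular list of steps and the dry-run behavior are assumptions on my part, though `guix lint`, `guix style`, and `guix build --rounds` are real subcommands and options:

```shell
# Hypothetical pre-submission helper (a sketch, not the real script):
# run_checks PKG runs the usual checks in order, stopping at the first
# failure; with DRY_RUN=1 it only prints what it would run.
run_checks() {
    pkg="$1"
    for step in \
        "guix lint $pkg" \
        "guix style --dry-run $pkg" \
        "guix build --rounds=2 $pkg"
    do
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "would run: $step"
        else
            $step || { echo "FAILED: $step" >&2; return 1; }
        fi
    done
}

# Show the plan without running anything:
plan=$(DRY_RUN=1 run_checks hello)
echo "$plan"
```

The point of the dry-run mode is exactly the "do I need to run this?" question above: a newcomer can see the plan before committing to it, and the same script can later grow the logic to decide which steps actually apply.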

Collectively, I don't think we've arrived at a consensus on

1. A list of the issues
2. How we'll measure that they're issues
3. How we'll measure improvement against the issues
4. How we'll address the issues

So often in my long career I've worked with organizations/people that really want to skip to (5): implement something.

Implement a vertically-integrated solution to gather feedback against reality: yes. Jump straight to the "final solution" with all the details managed: no.

> To be fair, here you forget one important blocker: having an account on
> the forge website.
>
> I do not speak about freedom issues, just about the fact of opening an
> account on the forge website.  For example, let's consider this project:
>
>      https://framagit.org/upt/upt
>
> And if I want to contribute with a Merge-Request, I need to open an
> account on the GitLab instance of FramaGit and push my code to this
> instance, even if I already have my fork living on my own Git
> repository and I have no plan to use this FramaGit forge.

You're correct: you usually have to have an account on some forge website. But think about how the various solutions scale: do you have to create an account on a forge website every time you make a commit?

And the point of that section was to think about what the forge website's button does for the committer, less about stating that it's superior or that we should definitely adopt it.

A lot of this thread has turned into a debate on specific tools instead of thinking about what the underlying problems are, and the various ways to address those. Tooling is an implementation detail.

> Please note that it is not truly about the complexity of the steps but
> about how many steps one is able to complete between two interruptions.

My intention was for it to be about the complexity/overhead in aggregate. Collected together, all the steps, their flags, the decisions about which steps and which flags to use, and what values to give them, should be considered "complex".

> Well, it is a well-known issue about task switching [1]. :-)
>
> 1: https://en.wikipedia.org/wiki/Task_switching_(psychology)

That it is! And what's the first line in that article? "Task switching, or set-shifting, is an executive function[...]".

So are we unintentionally filtering out contributions from people with compromised executive functioning?


