Software Acceleration and Desynchronization
A bit more than a month ago, I posted the following on social media:
Seeing more reports and industry players blaming code reviews for slowing down the quick development done with AI. It's unclear whether anyone's asking if this is just moving the cognitive bottleneck of "understanding what's happening" around. "Add AI to the reviews" seems to be the end goal here.
And I received multiple responses, some going "this is a terrible thing" and some going "yeah, that's actually not a bad idea." Back then I didn't quite have the concepts to clarify these thoughts, but I've since found ways to express the issue in a clearer, more system-centric way. While this post is clearly driven by the discourse around AI (particularly LLMs), it is more of a structural argument about the kind of changes their adoption triggers, and about the broader acceleration patterns the industry has seen before with other technologies and processes. As such, I won't really mention AI tools directly from here on.
The model I’m proposing here is inspired by (or is a dangerously misapplied simplification of) the one presented in Hartmut Rosa’s Social Acceleration,1 bent out of shape to fit my own observations. The pattern I’ll start with is one of loops, or cycles.
Loops, Cycles, and Activities
Let’s start with a single linear representation of the work around writing software:

This is a simplification; we could go much deeper, such as in this image from the DORA report of what value stream mapping could look like:2

But we could also go for a less linear representation to show a different type of complexity, even with a simplified set of steps:

Each of the steps above can imply a skip backwards to an earlier task, and emergencies can represent skips forward. For the sake of the argument, it doesn't matter whether our model is adequately detailed or just a rough approximation; we could make it more or less accurate (the “write tests” node alone could easily be expanded to fill a book). It is mostly here for illustrative purposes.
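To make the non-linear structure concrete, here's a toy sketch of such a loop as a graph with back-edges and emergency skips. The step names are invented for illustration and don't come from any specific workflow:

```python
import random

# A rough sketch of the non-linear loop: each step can fall back to an
# earlier one, and emergencies can skip forward. All names are invented.
transitions = {
    "plan":        ["write code"],
    "write code":  ["write tests", "plan"],    # back-edge: rethink the plan
    "write tests": ["review", "write code"],   # back-edge: fix the code
    "review":      ["ship", "write code"],     # back-edge: address comments
    "ship":        ["plan", "incident"],       # emergency: skip forward
    "incident":    ["write code"],             # hotfix re-enters the loop
}

random.seed(1)
step = "plan"
for _ in range(10):  # a random walk through one possible path of the loop
    print(step)
    step = random.choice(transitions[step])
```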
Overall, in all versions, tasks aim to go as quickly as possible from beginning to end, with an acceptable degree of quality. In a mindset of accelerating development, we can therefore take a look at individual nodes (writing code, debugging, or reviewing code) for elements to speed up, or at overall workflows by influencing the cycles themselves.
For example, code reviews can be sped up with auto formatting and linting—automated rule checks that enforce standards or prevent some practices—which would otherwise need to be done by hand. This saves time and lets people focus on higher-level review elements. And the overall cycle can be made faster by moving these automated rules into the development environment, thereby tightening a feedback loop: fix as you write rather than accumulating flaws on top of which you build, only to then spend time undoing things to fix their foundations.
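As a minimal sketch of what “fix as you write” can look like, here's a local pre-commit check that runs the same automated rules a reviewer would otherwise have to enforce by hand. The specific tools named (black, flake8) are stand-ins for whatever formatter and linter a team actually uses, and are assumed to be installed:

```python
#!/usr/bin/env python3
"""Run automated rule checks locally, before a change ever reaches review."""
import subprocess
import sys

def run_check(name, cmd):
    """Run one automated check and report whether it passed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"[{name}] failed:\n{result.stdout}{result.stderr}")
    return result.returncode == 0

checks = [
    ("formatting", ["black", "--check", "."]),  # style rules, no human needed
    ("lint",       ["flake8", "."]),            # mechanical error detection
]

# Run everything so all flaws surface at once, then block the commit if any failed.
results = [run_check(name, cmd) for name, cmd in checks]
sys.exit(0 if all(results) else 1)
```

Wired into a git hook or the editor itself, this tightens the feedback loop from a review round-trip down to seconds.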
Concurrent Loops
So far, so good. The problem is that this one isolated loop is insufficient to properly represent most software work. Not only are there multiple tasks run in parallel by multiple people, each representing independent loops; each person is also part of multiple loops. For example, while you might tackle a ticket for some piece of code, you may also have to write a design document for an upcoming project, provide assistance on a support ticket, attend meetings, focus on developing your career via mentorship sessions, keep up with organizational exercises through publishing and reading status reports, and so on.
Here are a few related but simplified loops, as a visual aid:

Once again, each person on your team may run several of these loops, of various types, in parallel during their workday.
But more than that, there might be multiple loops that share sections. You can imagine how writing code in one part of the system can prepare you or improve your contributions to multiple sorts of tasks: writing code that interacts with it, modifying it, reviewing changes, writing or reviewing docs, awareness of possible edge cases for incidents, better estimates of future tasks, and so on.
You can also imagine how, in planning how to best structure code for new product changes, experience with the current structure of the code may matter, along with awareness of the upcoming product ambitions. Likewise, the input of people familiar with operational challenges of the current system can prove useful in prioritizing changes. This sort of shared set of concerns informed ideas like DevOps, propelled by the belief that good feedback and integration (not throwing things over the fence) would help software delivery.
Basically, a bunch of loops can optionally contribute to one set of shared activities, but some activities can also be contributors to multiple loops, and these loops might be on multiple time scales:

Here, the activity of reviewing code might be the place where the coding loop gets the straightforward reviews it needs, but it is also a place where someone's career growth plans can be exercised in trying to influence or enforce norms, and where someone looking at long-term architectural and growth patterns gets to build awareness of ongoing technical changes.
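One way to picture this structurally: treat each loop as a list of activities, and any activity that appears in more than one loop as a potential shared point. The loop and activity names below are invented for illustration:

```python
from collections import Counter

# Toy model: loops as ordered activity lists; shared activities are
# where otherwise-independent loops meet. All names are illustrative.
loops = {
    "coding":       ["plan", "write code", "review code", "ship"],
    "career":       ["set goals", "review code", "mentor", "reflect"],
    "architecture": ["observe systems", "review code", "propose", "migrate"],
}

# Count how many loops each activity participates in.
counts = Counter(act for acts in loops.values() for act in acts)
shared = {act: n for act, n in counts.items() if n > 1}
print(shared)  # {'review code': 3}: one activity serving three loops at once
```

Speeding up “review code” for the coding loop alone risks degrading what the two other loops were getting out of it.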
These shared bits are one of the factors that can act like bottlenecks, or can counter speed improvements. To make an analogy, consider cycling to work 30 minutes each way every day. If you sped up your commute by going twice as fast via public or private transit, you’d save 2.5 hours every week (an hour of commuting a day, cut in half, over five days); however, some of that time wouldn’t really be “saved” once you consider that you might still need to exercise to stay as physically fit. You would either need to spend much of the saved time exercising outside your commute, or end up incidentally trading commute time for longer-term health factors instead.
Applied to software, we may see this pattern in the idea that “we can now code faster, but code review is the new bottleneck.” The obvious step will be to try to speed up code reviewing to match the increased speed of code writing. To some extent, parts of code reviewing can be optimized. Maybe better tooling can detect some types of errors more reliably and rapidly. Again, like linting or type checking, these checks ideally get moved into development rather than reviews.
But code reviewing is not just about finding errors. It is also used to discuss maintainability and operational concerns, to spread knowledge and awareness, to get external perspectives, or to foster a broader sense of ownership. These purposes, even if they could be automated or sped up, all point to the existence of other loops that people may have to maintain regardless.
Synchronization and Desynchronization
If we decide to optimize parts of the work, we can hope for a decent speedup if we do one of the following:
- Continuously develop a proper understanding of the many purposes a task might have, to make useful, well-integrated changes
- Give up some of the shared purposes by decoupling loops
The first option is challenging and tends to require research, iteration, and an eye for ergonomics. Otherwise you’ll quickly run into problems of “working faster yet going the same speed,” where despite adopting new tools and methods, the bottlenecks we face remain mostly unchanged. Doing this right implies making your changes knowing they'll structurally impact the work you're trying to speed up, and being ready to support these disruptions.
The second is easy to do in ways that accidentally slow down or damage other loops—if the other purposes still exist, new activities will need to replace the old ones—which may in turn feed back into the original loop (e.g.: code reviews may block present code writing, but also inform future code writing), with both being weakened or on two different tempos when decoupled. This latter effect is something we’ll call “desynchronization.” One risk of being desynchronized is that useful or critical feedback from one loop no longer makes it to another one.
To cope with this (but not prevent it entirely), we have a third option in terms of optimization:
- Adopt “norms” or practices ahead of time that ensure alignment and reduce the need for synchronization.
This is more or less what “best practices” and platforms attempt to provide: standards that, when followed, reduce the need for communication and sense-making. These tend to provide a stable foundation on which to accelerate multiple activities. They don’t fully prevent desynchronization; they just stave it off.
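As a small sketch of what such a norm can look like in code, consider a hypothetical platform contract that every deployable service agrees to ahead of time. The interface below is invented for illustration; the point is that agreeing on it once replaces some per-change coordination:

```python
from typing import Protocol

class Deployable(Protocol):
    """A hypothetical platform norm: anything that ships must expose
    these hooks. Agreeing on the contract ahead of time replaces some
    per-change synchronization between teams."""

    def health_check(self) -> bool:
        """Report whether the service is fit to receive traffic."""
        ...

    def rollback(self) -> None:
        """Return the service to its last known-good state."""
        ...
```

The contract doesn't remove the need to ever talk; it just lets more changes proceed without a meeting, until the norm itself needs revisiting.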
To illustrate desynchronization, let’s look at varied loops that could feed back into each other:

These show shared points where loops synchronize, across ops and coding loops, at review time. The learnings from operational work can feed back into platform and norms loops, and the code reviews with ops input are one of the places where these are "enforced".3 If you remove these synchronization points, you can move faster, but loops can also go on independently for a while and will grow further and further apart:

There's not a huge difference between the two images, but what I chose to display here is that a lack of dev-time ops input (during code review) might lead to duplicated batches of in-flight fixes that need to be carried and applied to code as it rolls out, with extra steps peppered throughout. As changes are made to the underlying platform or shared components, their socialization may lag behind as opportunities to propagate them are reduced. If development is sped up enough without a matching increase in the ability to demonstrate the code's fitness (without waiting for more time in production), the potential for surprises goes up.
Keep in mind that this is one type of synchronization across one shared task between two high-level loops. Real work has more loops, with more nodes, more connections, and many subtler synchronization points both within and across teams and roles. Real loops might be more robust, but less predictable. A loop with multiple synchronization points can remove some of them and look faster until the few remaining synchronization points either get slower (to catch up) or undone (to go fast).
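To get an intuition for why removing synchronization points looks fast at first and degrades later, here's a deliberately crude simulation. Two loops each evolve their own view of the system, and a synchronization point periodically reconciles them; all the numbers are arbitrary, chosen only to show the shape of the effect:

```python
import random
from typing import Optional

def simulate(steps: int, sync_every: Optional[int]) -> float:
    """Toy drift model: two loops change their local view of the system
    each step; a sync point reconciles the views. Returns the final gap
    between the two views. Purely illustrative, not a real measurement."""
    random.seed(0)
    dev_view = ops_view = 0.0
    for step in range(1, steps + 1):
        dev_view += random.random()        # fast loop changes things a lot
        ops_view += random.random() * 0.2  # slow loop changes things a little
        if sync_every and step % sync_every == 0:
            # Review-time synchronization: both sides reconcile their models.
            dev_view = ops_view = (dev_view + ops_view) / 2
    return abs(dev_view - ops_view)

print(simulate(100, sync_every=7))    # small gap: frequent resync keeps views close
print(simulate(100, sync_every=None)) # large gap: loops free-run and drift apart
```

Removing the sync point makes every individual step cheaper, but the gap keeps growing, and with it the cost of the eventual forced resynchronization.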
Not all participants in synchronization points get the same thing out of them, either. It’s possible that one engineer gets permission (and protection) from one, another gets awareness, some other team reinforces compliance, and a management layer claims accountability from it having happened, for example.
It's easy to imagine both ends of a spectrum: on one end, organizations that get bogged down in synchronous steps to avoid all surprises; on the other, organizations that get tangled into a web of concurrent norms and never-deprecated generations of the same stuff, all carried at once because none of the synchronous work happens.
Drift that accumulates across loops will create inconsistencies as mental models lag, force corner-cutting to keep up with changes and pressures, and widen gaps between what we think happens and what actually happens.4 It pulls subsystems apart, weakens them, and contributes to incidents—unintended points of rapid resynchronization.
I consider incidents to be points of rapid resynchronization because they're usually what happens when you end up desynchronizing so hard that incident response forces you to suspend your usual structure, quickly reprioritize, upend your roadmap, and (ideally) have lots of people across multiple teams suddenly update their understanding of how things work and break down. That the usual silos can't keep going as usual points to a forced repair after too much desynchronization.
As Rosa points out in his book, this acceleration tends to grow faster than what the underlying stable systems can support, and those stable systems become hindrances of their own. Infrastructure and institutions are abandoned or dismantled when the systems they enabled gradually feel stalled or constrained by them and seek alternatives:
[Acceleration] by means of institutional pausing and the guaranteed maintenance of background conditions is a basic principle of the modern history of acceleration and an essential reason for its success as well. [Institutions] were themselves exempted from change and therefore helped create reliable expectations, stable planning, and predictability. [...] Only against the background of such stable horizons of expectation does it become rational to make the long-term plans and investments that were indispensable for numerous modernization processes. The erosion of those institutions and orientations as a result of further, as it were, “unbounded” acceleration [...], might undermine their own presuppositions and the stability of late modern society as a whole and thereby place the (accelerative) project of modernity in greater danger than the antimodern deceleration movement.
The need for less synchronization doesn’t mean that synchronization no longer needs to happen. The treadmill never slows down, and actors in the system must demonstrate resilience to reinvent practices and norms to meet demands. This is particularly obvious when the new pace creates new challenges: what brought us here won’t be enough to keep going, and we’ll need to overhaul a bunch of loops again.
There’s something very interesting about this observation: A slowdown in one place can strategically speed up other parts.
Is This Specific Slow a Good Slow or a Bad Slow?
There’s little doubt in my mind that one can go through a full cycle of the “write code” loop faster than through the “suffering the consequences of your own architecture” loop—generally, that latter cycle depends on multiple development cycles to get adequate feedback. You can ship code every hour, but it can easily take multiple weeks for all the corner cases to shake out.
When operating at the level of system design or software architecture (“We need double-entry bookkeeping that can tolerate regional outages”), we tend to require an understanding of the system’s past, a decent sense of its present with its limitations, and an ability to anticipate future challenges to inform the directions in which to push change. This is a different cycle from everyday changes (“The feature needs a transaction in the ledger”), even if both are connected.
The implication here is that if you’re on a new code base with no history and a future that might not exist (such as short-term prototypes or experiments), you’re likely to be able to have isolated short loops. If you’re working on a large platform with thousands of users, years of accrued patterns and edge cases, and the weight of an organizational culture to fight or align with, you end up relying on the longer loops to inform the shorter ones.
The connections across loops accrue gradually over time, and people who love the short loops get very frustrated at how slow they’re starting to be:
Yet irreversible decisions require significantly more careful planning and information gathering and are therefore unavoidably more time intensive than reversible ones. In fact, other things equal, the following holds: the longer the temporal range of a decision is, the longer the period of time required to make it on the basis of a given substantive standard of rationality. This illustrates the paradox of contemporary temporal development: the temporal range of our decisions seems to increase to the same extent that the time resources we need to make them disappear.
That some folks go real fast and reap benefits while others feel bogged down having to catch up can therefore partially be a sign that we haven’t properly handled synchronization and desynchronization. But it can also be a function of people having to deliberately slow down their work when its output either requires or provides the stability the fast movers depend on. Quick iteration at the background level—what is generally taken for granted as part of the ecosystem—further increases the need for acceleration from all participants.
In a mindset of acceleration, we will seek to speed up every step we can, through optimization, technological innovation, process improvements, economies of scale, and so on. This connects to Rosa’s entire thesis of acceleration feeding into itself.5 One of the points Rosa makes, among many, is that we need to see the need for acceleration and the resulting felt pressures (everything goes faster, keeping up is harder; therefore we need to do more as well) as a temporal structure, which shapes how systems work. So while technological innovation offers opportunities to speed things up (often driven by economic forces), these innovations transform how our social structures are organized (often through specialization), which in turn, through a heightened sense of what can be accomplished and a feeling that the world keeps going faster, provokes a need to speed things up further and fuels technological innovation. Here's the diagram provided in his book:

We generally frame acceleration as an outcome of technological progress, but the idea here is that the acceleration of temporal structures is, on its own, a mechanism that shapes society (and, of course, our industry). Periods of acceleration also tend to come with multiple forms of resistance; while some are a bit of a reflex to try and keep things under control (rather than having to suffer through more adaptive cycles), there are also useful forms of slowing down, those that can provide stability and lengthen the horizons of other acceleration efforts.
Few tech companies have a good definition of what productivity means, but the drive to continually improve it is nevertheless real. Without a better understanding of how work happens, we’re likely to keep seeing new tech land on people's work as a haphazard slashing and boosting of random parts of random work loops, with wide variation in how they frame its impact. I think this overall dynamic can provide a useful explanation for why some people, despite being able to complete certain tasks much faster, either don't feel more productive overall, or actually feel like they save no time and end up with more work. It's hard at times to untangle which type of slowdown is being argued for, but one should be careful not to classify all demands for slowing down as useless Luddite grumblings.6 It might be more useful down the road to check whether you could be eroding your own foundations without a replacement.
What do I do with this?
A systems-thinking approach tends to require a focus on interactions over components. What the model proposed here does is bring a temporal dimension to these interactions. We may see the tasks and activities done during work as components of how we produce software; the synchronization requirements and feedback pathways across these loops, and for various people, provide a way to map out where they meet.
Ultimately, even the loop model is a crude oversimplification. People are influenced by their context and influence it back in a continuous manner that can't be constrained to well-defined tasks and sequences. Reality is messier. This model could be a tool to remind ourselves that no acceleration happens in isolation. Each effort contains the potential for desynchronization, and for a resulting reorganization of related loops. In some ways, the aim is not to find specific issues, but to find potential mismatches in pacing, which suggest challenges in adapting and keeping up.
The analytical stance adopted matters. Seeking to optimize tasks in isolation can sometimes yield positive local results, within a single loop, and occasionally at a wider scale. Looking across loops, in all their tangled mess, is however more likely to let you see what’s worth speeding up (or slowing down to speed other parts up!), where pitfalls may lie, and where the need for adjustments will ripple out and play itself out. Experimentation and attempts to speed things up will keep happening unless something drastically changes in Western society; experimenting with a better idea of what to look for in terms of consequences is not a bad idea.
1: While I have not yet published a summary or quotes from it in my notes section, it's definitely one of the books that I knew, from the moment I started reading it, would have a huge influence on how I frame stuff, and as I promised everyone around me who saw me reading it: I was gonna be very annoying once I was done with it. Well, here we are. Grab a copy of Social Acceleration: A New Theory of Modernity. Columbia University Press, 2013.
2: Original report, figure 50 is on p. 75.
3: This example isn't there to imply that the synchronization point is necessary, nor that it is the only one, only that it exists and has an impact. This is based on my experience, but I have also seen multiple synchronization points in either code reviews or RFC reviews whenever work crosses silo boundaries across teams and projects grow larger in organizational scope.
4: I suspect it can also be seen as a contributor to concepts such as technical debt, which could be framed as a decoupling between validating a solution and engineering its sustainability.
5: I believe this also connects to the Law of Stretched Systems in cognitive systems engineering, and might overall be one of these cases where multiple disciplines find similar but distinct framings for similar phenomena.
6: Since I'm mentioning Luddism, I need to make the mandatory reference to Brian Merchant's Blood in the Machine, which does a good job of reframing Luddism in its historical context as a workers' movement trying to protect their power over their own work in the first moments of the Industrial Revolution. Luddites did not systematically resist or damage all new automation technology; they particularly targeted the factory owners who offered poor working conditions, while sparing the others.
