Think back to a decade ago. Being a developer then looked a lot different than it does today.

  • AWS revenue was only $3.1B—compared to over $80B just last year
  • Many companies were still building out their own data centers
  • Cloudflare had only just proven how helpful it was in combating large-scale DDoS attacks
  • DevOps and deploying straight to production weren’t yet the norm
  • React had only just been open-sourced, and React Native didn’t exist yet
  • Cross-platform work was much more time intensive because Flutter wasn’t around
  • Monoliths were more common than microservices (and it wasn’t cool to hate on microservices yet)

Ten years is a lot of time for workflows to change. And no industry changes with more powerful ramifications than software development. Software touches every industry, and there are exciting shifts in progress that we feel will revolutionize the next decade of development—from how teams are composed to how they will generate the next wave of software. 

AI: Self-healing code and new team composition

We would be burying the lede if we didn’t start here. While many in the media act like AI is a recent phenomenon, those in the field know that Google Brain has led the way in Deep Learning and Natural Language Processing for years. And any CS student will happily tell you how NLP started in the 1940s after World War II before it significantly progressed in the 70s and 80s.

Top of mind for coders is whether AI will replace them. The truth is mixed: 

Creativity and difficult problem-solving are what enable developers to thrive—this won’t change. But technical know-how and experience may fade in importance relative to these skills, allowing less technical people to thrive in jobs where previously a technical degree was required. There will be a new wave of low-code/no-code platforms that will be prompt-based. These platforms will become even more powerful in terms of the code they can generate, allowing people with limited coding knowledge to build more complex software than they can today. 

There will be “self-healing” code—AI that automatically detects when something breaks, identifies the issue, and writes the code to fix it. But sometimes, this AI fix might break something else, so having a human in the loop will always be required.
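As a sketch of what such a loop might look like, the cycle is: run the tests, ask a model for a patch when they fail, retry, and escalate to a human when the attempt budget runs out. Every function name below is a hypothetical stub for illustration, not a real API:

```python
# Hypothetical "self-healing" loop. run_tests and propose_patch are
# illustrative stubs standing in for a real test runner and an AI model.

def run_tests(code):
    """Return (passed, failure_message). Stubbed: fails on a known bug."""
    if "divide(x, 0)" in code:
        return False, "ZeroDivisionError in divide()"
    return True, ""

def propose_patch(code, failure):
    """Stand-in for an AI model that suggests a fix for the failure."""
    return code.replace("divide(x, 0)", "divide(x, 1)")

def self_heal(code, max_attempts=3):
    """Detect a failure, apply AI-suggested patches, then hand off to a human."""
    for _ in range(max_attempts):
        passed, failure = run_tests(code)
        if passed:
            return code, "auto-fixed"
        code = propose_patch(code, failure)
    # A human stays in the loop for anything the model can't fix.
    return code, "needs human review"

fixed, status = self_heal("result = divide(x, 0)")
print(status)  # "auto-fixed"
```

The key design point is the escape hatch: after a bounded number of attempts, the change is routed to a person rather than applied blindly.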

AI-assisted coding, like Sourcegraph’s Cody and GitHub’s Copilot, stands to have the greatest negative effect on average or below-average developers. 10x engineers will use AI to become 100x engineers, but 1x engineers will soon cease to exist. AI will replace lower-level and monotonous coding as models get better and better. 

Ever wonder how Sourcegraph uses Cody? One of our engineers shares:

“[…] I’m writing up the description for the PR, and I’m struggling to describe the current situation, so I turn to Cody for help.

With a minimum of input (I didn’t have to craft a long and precise question),… pic.twitter.com/9MCs9x2ulA

— Sourcegraph (@sourcegraph) May 19, 2023

As a result, team structure will change significantly. Currently, most software teams exist in a variation of this structure:

Just as human computers in the mid-1900s were eventually replaced by cells in a spreadsheet, large parts of the work responsibilities of QA and DevOps teams will be streamlined, like bug detection, testing, integration, deployment, and monitoring. So you might see a future team that looks like this:

Freed from maintenance and manual work, developers will become more creative and evolve into managers/overseers of particular processes. Sourcegraph’s Cody already helps automate many parts of the developer’s workflow, including compiling with Code Graph, Intelligent Search, and Code Insights that reveal trends in the code base.

One of the side effects of AI’s popularity will be even greater adoption of Python—the scripting language of choice for machine learning. Python’s open-source ecosystem and massive collection of pre-built libraries support many of the functions one would want for ML, big data, and cloud computing. Python projects like Astral, Pydantic, and Polars have already amassed a substantial community that is eager to deploy these libraries.

Ultimately, AI will allow for a faster, more accurate, and more robust software development process that we couldn’t be more excited about.

Collaborative Development—Code, comment, and deploy from the browser

As impressive as AI may be, we’re entering an era where collaboration on software will be so much easier, more effective, and more enjoyable than ever before. 

Real-time collaboration will be at the core of future IDEs/code editors—we’ve seen virtually every other software product move into the collaborative era (Adobe to Figma, Word to Google Docs, etc.) but not development (yet). Developers will collaborate in real time without waiting for someone to comment on their pull request. Developers will edit code collaboratively, review each other’s work, and communicate within one platform. Less technical people will write more code with AI assistance thanks to integrated learning and documentation, and more experienced programmers will be able to leverage more languages.

There’s also a good chance for further integration into other tools in the development lifecycle (PM tools, version control, CI/CD, cloud, etc.). Companies like StackBlitz offer a vision of what devs can expect as they go from prototyping a new idea and testing a library to reproducing bugs, all in one browser tab. Future code editors will offer an integrated CLI, code actions, screen sharing, the ability to share complete code environments, and support for many languages simultaneously, allowing devs to save time instead of constantly switching between applications.

The UX of code creation is also changing. Apps are now deployed via the browser. In addition to the browser-based paradigm, we see IDEs of the future focusing on improved, personalized UX and eventually accounting for interfaces beyond text, like touch and voice.

Future code editors, like Zed and Warp, built in languages like Rust, will be faster, have a lower memory footprint, and have much lower insertion latency than legacy IDEs built on JavaScript or TypeScript. Zed is built with multiple people in mind, so calling users, navigating code together, and working from any machine is simple and intuitive.

This wave of collaborative development will usher in a fantastic era of software productivity and progress. Multiplayer FTW.

Edge computing—Faster, more private, and distributed

We’re becoming more demanding of our devices. We’re no longer just exchanging texts or browsing the web. We’re using powerful AI models to expand photos, generate words, and predict patterns. This progress drives us toward edge computing, which is more distributed and inherently private.

As compute functions run on the device, user data is more protected, but this also requires a greater focus on edge security. On-device computation will introduce new security and privacy challenges that startups are already gearing up to tackle.

We’ll see a meaningful improvement in scalability as devs shift towards distributed edge architectures and compute resources are spread across multiple edge nodes.
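At its simplest, that shift means the routing layer, not the application, decides where a request runs. The toy sketch below picks the lowest-latency healthy node; the node names and latency figures are invented purely for illustration:

```python
# Toy sketch of edge-aware routing: send each request to the healthy node
# with the lowest observed latency. Node names and numbers are made up.
EDGE_NODES = {"us-east": 12.0, "eu-west": 48.0, "ap-south": 95.0}  # ms

def pick_node(latencies, unhealthy=()):
    """Return the lowest-latency node that isn't marked unhealthy."""
    candidates = {n: ms for n, ms in latencies.items() if n not in unhealthy}
    return min(candidates, key=candidates.get)

print(pick_node(EDGE_NODES))                         # us-east
print(pick_node(EDGE_NODES, unhealthy={"us-east"}))  # eu-west
```

Spreading load this way is what lets capacity scale by adding nodes rather than growing a single region.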

While incumbents like Cloudflare, Akamai, Fastly, and the big cloud providers have the infrastructure in place to help with the shift to the edge, the startups in this market (Fly.io, Render, Zeet, and Pulumi) can out-execute on innovation and developer experience while disrupting the technical landscape.

Lastly, we see a lot of opportunity in edge native development frameworks—creating abstractions as Vercel did with front-end development. The app design of the future can seamlessly transition between edge and cloud and abstract away issues like device heterogeneity as well as latency and connectivity issues. 

Increased Abstraction—Simplified everything

Some of the most impressive technical achievements in software have been borne out of abstractions:

  • React → abstraction of low-level JavaScript and DOM manipulation
  • Docker → abstraction of containerization
  • AWS Lambda and Azure Functions (serverless) → abstraction of infra/server management
  • GraphQL → abstraction of APIs
  • CircleCI → abstraction of CI/CD pipelines
  • dbt → abstraction of ETL/ELT/reverse ETL data pipelines

Now we are on the verge of newer abstractions like WebAssembly, or Wasm, which allows portable code written in any language to be compiled and executed on any browser or platform that supports Wasm (most modern browsers). With Wasm, developers can finally “write once, run anywhere”: software can be extended into other languages and environments, and developers can work in any language, from anywhere.

Wasm enables code reusability across different platforms—browsers, applications (server-side and desktop), and IoT. It saves teams from having to rewrite code for each specific platform. And companies like Dylibso, whose seed round of funding we led, are making it easier than ever for developers to run and maintain Wasm in production.

Modsurfer, a dashboard from Dylibso, offers a quick overview for teams building with Wasm. Abstractions like this are making Wasm more accessible.

We’re bullish on abstractions like Wasm that enhance the developer workflow. Wasm does this by offering language flexibility, performance improvements, code reusability, browser compatibility, debugging, and improved security. So developers can increase the quantity and quality of the software they produce. 

Abstractions are coming to other parts of the developer workflow, too, like cloud compute (Modal), the deployment process itself (Jetpack, Railway), and, naturally, a lot in AI: scaling machine learning workloads (Anyscale), running models in the cloud (Replicate), and training models (Mosaic), to name a few. We’re heading towards a promising future where all code is more customizable, faster to deploy, and more secure.

Reinvention of Data Caching at the Edge—An extensive decoupling

That spinning bubble while an app fetches data? Hopefully it will become a relic of the past, a funny memory we tell our grandchildren about, like the sound of dial-up internet. We are in the midst of a grand reinvention of data caching that will significantly improve the performance of applications.

Trends in data caching right now include:

  • In-memory caching—temporarily storing data in a convenient location (like the device itself), which improves performance but can create security risks.
  • Edge caching—dynamic content caching for real-time personalization of content. While Fastly capitalized on this, we expect to see a rise in edge caching as a service.
  • Intelligent caching algorithms—an area where we expect AI to help a lot, deciding what gets cached where based on usage.
  • Serverless caching—this will significantly aid in scaling.
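To make the first trend concrete, here is a minimal in-memory cache with LRU eviction and a TTL, using only the Python standard library. It is a sketch, not a production design; real systems would typically reach for Redis or a similar store:

```python
# Minimal in-memory cache: least-recently-used eviction plus a time-to-live.
import time
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity=128, ttl=60.0):
        self.capacity, self.ttl = capacity, ttl
        self._data = OrderedDict()  # key -> (value, expiry_time)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() > expires:       # expired: evict and miss
            del self._data[key]
            return default
        self._data.move_to_end(key)          # mark as recently used
        return value

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = (value, time.monotonic() + self.ttl)
        if len(self._data) > self.capacity:  # evict least recently used
            self._data.popitem(last=False)

cache = LRUCache(capacity=2, ttl=5.0)
cache.put("user:1", {"name": "Ada"})
cache.put("user:2", {"name": "Grace"})
cache.get("user:1")                    # touch user:1 so it's most recent
cache.put("user:3", {"name": "Alan"})  # capacity exceeded: evicts user:2
print(cache.get("user:2"))             # None
```

Even this toy version surfaces the real design questions: what gets evicted, when entries go stale, and what a miss costs.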

These trends mean developers must think carefully about the applications they want to build and how those applications are cached. The data access layer requires a design that APIs can integrate with effectively. Developers must also consider cache performance, architecture, data consistency, and coherency.

Every year, the amount of data generated goes up. Now, with AI, data is produced in greater quantity and at higher fidelity, and much of it is generated at the edge (see the edge section above). Developers will need a more efficient and easier-to-use system to help them process and cache all that data. While there are a lot of incumbents in this space (Cloudflare, Fastly, Akamai) and a lot of infrastructure required to cache data effectively, we think there’s already a decoupling of data storage (Snowflake), caching (Redis, Dragonfly DB, Readyset, etc.), movement (Confluent, Redpanda, Pulsar, Flink), and analysis (Tableau, Looker, etc.).

Whenever technology is unbundled in this way, several gaps open up and can be entry points for motivated teams. We’re hoping this means no one has to wait for their data to load in the future.

Searching for explorers

Developers are a discerning customer base. They can be skeptical yet open-minded because every developer fundamentally understands that all processes can be improved. Here are the software processes and problems we believe represent enormous potential:

  • How to use generative AI to improve data/code generation, bug detection, testing, and storage.
  • How coding itself can be made more collaborative, personalized, and efficient.
  • How to embrace and account for the scaling needs of edge computing.
  • Which existing tech is ready for a simplified abstraction.
  • What will help devs cache their data at the edge.

At Felicis, we’ve been proud to work with companies like Semgrep, Supabase, Weights & Biases, Sourcegraph, n8n, Dylibso, Meilisearch, and Stream that are inventing a better developer experience. But there are still so many areas that need attention. If you’re exploring any of these problems like the ones above, please contact us. We’d love to hear from you and collectively imagine the fantastic possibilities coming our way.
