The 2024.07.2 bug fix for TeamCity On-Premises just rolled out — download now and keep your servers at peak performance!
With every minor update, we deliver a significant number of bug fixes, resolve performance issues, and, most importantly, address security issues. Version 2024.07.2 addresses over 30 issues, including matrix build failures, incorrect information about the number of authorized agents, issues related to the newly released GitHub checks trigger, and more.
For the complete list of the issues fixed in this version, please refer to our release notes.
As with other minor updates, TeamCity 2024.07.2 shares the same data format with all 2024.07.x releases, allowing you to easily upgrade or downgrade within these versions without the need to back up or restore.
We recommend upgrading to apply the latest improvements and security fixes to your TeamCity server.
Before you start, read our upgrade notes and use one of the following options to upgrade:
We’re excited to announce the release of the updated TeamCity plugin for IntelliJ IDEA! 🎉 You can now download it directly from JetBrains Marketplace.
Using the plugin, you can trigger TeamCity builds directly from within your IDE and test any changes before committing them to the version control system.
Why get the new plugin?
This plugin has been built from the ground up and is intended to eventually replace the existing TeamCity plugin once support for the most frequently used and requested features has been added.
Here’s what’s new in the plugin:
We’ve added functionality enabling you to link TeamCity projects and build configurations to your IDE project so that you only see build configurations related to your IDE project.
With the help of the remote run feature, you can run build configurations on your local changes without committing them to the VCS.
The plugin’s tool window now contains a new Personal Builds tab where past personal builds are listed. It also shows live updates of all builds executed using the remote run feature.
Now it’s possible to select a build configuration and watch its build status for each commit in the VCS Log tool window.
Key benefits of this updated plugin include:
The ability to manually configure which TeamCity projects relate to your code, giving you more control over your builds.
Enhanced performance that significantly reduces lag between your actions in the IDE and the TeamCity server’s response.
We’re actively developing this plugin and planning to add even more features in upcoming releases. Your feedback is critical in shaping the tool to better meet the needs of IntelliJ IDEA developers.
You can install both the old and new plugin versions side by side, so feel free to compare and explore!
How to get started with the TeamCity plugin for IntelliJ IDEA
1. Install the plugin from JetBrains Marketplace.
2. Once the plugin is installed, open your project in IntelliJ IDEA and invoke the plugin’s settings using the Tools | TeamCity (Experimental) | Settings… menu.
3. Click Log In and enter the following values:
Server URL – the HTTP(S) address of your TeamCity server.
Access token – your user access token that can be generated on the Your Profile | Access Tokens page in TeamCity.
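To sanity-check these values outside the IDE, you can call the TeamCity REST API with the same token. This is a hedged example; the server URL below is a placeholder for your own instance.

```shell
# A 200 response with a version string confirms the URL and token are valid.
curl -s -H "Authorization: Bearer $TEAMCITY_TOKEN" \
  "https://teamcity.example.com/app/rest/server/version"
```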
With the new plugin, you can link build configurations from TeamCity directly to the project you have open. In the old plugin, this had to be configured through VCS roots, which wasn’t an easy process.
Now, users only need to create a given configuration once, and it will be saved in the source code. Everyone who downloads the project will then have it automatically configured and available without the need to set it up themselves.
Testing your local changes
One of the key benefits of the TeamCity IDEA plugins (both old and new) is the ability to run builds with your local changes before they are pushed to a remote branch, also known as a remote run. This allows you to spot issues without breaking the build for everyone else on your team.
Here’s how you can initiate a remote run from your IDE.
1. Make some changes to your code.
2. Go to Tools | TeamCity (Experimental) | Remote Run….
3. Then, under Remote Run… | Settings…, select the target build configuration(s) that you want to run with your local changes. The plugin will remember your choice and run builds for the same configuration(s) on subsequent remote runs. You can configure these project-configuration relations in the plugin settings.
Link your projects to TeamCity build configurations
Setting up project-configuration relations allows you to explicitly choose which configurations should be triggered depending on the introduced changes.
TeamCity’s IntelliJ IDEA integration enables you to choose the linking scope, selecting whether you want to link the whole project or only individual project modules to your TeamCity build configurations.
1. Click Tools | TeamCity (Experimental) | Settings… to open the plugin’s settings.
2. Choose the required Linking scope value:
PROJECT – allows you to link the entire IntelliJ IDEA project to the target build configuration(s). This option works best when you need to trigger builds of the same configuration(s) regardless of which part of your code changed.
MODULE – allows you to link individual modules to corresponding build configurations. For example, you can run both Build and Test configurations if the main module of your application changes, and only the Test configuration if you edit a separate module with unit and functional tests. This mode also benefits mono repositories where each module is a separate project with its own target build configuration(s).
Share your feedback
We’re still working on making the new plugin ready to replace the old one. For the time being, you can download both plugins – they won’t interfere with each other.
Is there any functionality that you’d like us to add to the new plugin? Let us know in the comments below! We want to make the plugin as useful as possible, and your feedback can help us do exactly that.
We’re excited to announce the public release of CodeCanvas, JetBrains’ new platform for orchestrating cloud development environments (CDEs). To better understand CodeCanvas and why it is important for JetBrains’ remote development strategy, let’s dive a bit into history and context.
Remote development trend
Many companies still rely on local development, a method that has several downsides, including onboarding difficulties, security risks, hardware limitations, and its inefficient use of developer time. All of this directly translates into higher costs for the business.
To address these issues, businesses turned to remote machines. Initially, this was supported through virtual desktop infrastructure (VDI) solutions, where only a “video stream” of the IDE running on the remote machine was delivered to the local system. However, a major problem with this setup was the input lag when typing or moving the mouse.
Later, a new approach evolved – IDEs began supporting a split model. Heavy backend processes would run on a remote machine, while a local client would run only a lightweight UI that connects to the backend. This setup improved responsiveness and is what we now know as remote development. It first appeared in VS Code and later in JetBrains IDEs. This solution seemed to resolve the main issues: Code no longer needed to reside on local machines, and remote machines could scale as needed.
But, as the number of these remote machines grew, managing them became more complex. Either users had to be granted access to the cloud services to create machines themselves, or the IT department had to manage the machines for them. Additionally, the cost of running these machines was significant, and ensuring they were used efficiently became essential.
This gave rise to a new category of tools: cloud development environment (CDE) orchestrators. These platforms manage the lifecycle of CDEs, scale resources, and ensure cost efficiency.
CodeCanvas is JetBrains’ entry into this space that, in addition to general orchestration capabilities, offers many other valuable features.
Looking to the future, remote development and CDEs perfectly align with the growing trend of AI-assisted development. As autonomous AI developers emerge, they will require scalable dev environments to execute tasks. CDE orchestrators, like CodeCanvas, will let these AI systems create and manage their own environments through APIs.
CodeCanvas public release
CodeCanvas was initially launched silently in May 2024 so that we could gather early feedback from select clients. Now, it’s finally ready for public release. We’re starting with version 2024.2, which is already available to install.
Why CodeCanvas
The remote development orchestration market is still young, and existing solutions often have limitations. By working closely with our customers who need remote development, we’ve identified these pain points and are addressing them with CodeCanvas.
Our goal is to make working in a CDE feel no different than working in a local IDE, if not better. With CodeCanvas, developers no longer need to worry about cloning repositories, selecting the right IDE version, installing dependencies, starting services, first-time compilation, or indexing. In just 10–15 seconds, a fully prepared environment is ready, allowing developers to dive straight into coding.
In addition to core orchestration features, CodeCanvas offers:
On-premises installation: CodeCanvas is an on-premises solution deployed in Kubernetes clusters, currently supporting AWS, Google Cloud, and Azure. You can find more details about the architecture in the CodeCanvas documentation.
Advanced JetBrains IDE support: CodeCanvas provides first-class support for JetBrains IDEs, handling indexing, plugins, version management, and more. It supports most IntelliJ-based IDEs, including IntelliJ IDEA, PyCharm, Rider, and others.
VS Code support: For those who prefer a different editor.
Flexible dev environment configurations: CodeCanvas allows you to configure dev environments with as much CPU, memory, and storage as needed. The only limit is your cloud provider’s resources.
GPU support: Developers can create dev environments with GPU support, enabling them to run GPU workloads like ML training.
Automated preparation of CDEs: Use dev environment templates and lifecycle scripts to create pre-configured CDEs.
Ready-to-work environments: The warm-up feature helps developers start working in a CDE with already-built indexes, downloaded dependencies, and pre-built projects. Using a standby pool of pre-created CDEs, you can reduce the time to start a new environment almost to zero.
Security: Multiple authentication options, connection to dev environments via SSH, personalized environment settings, and a robust permission system.
Ease of administration: A web-based UI allows administrators to manage users and their access to cluster resources, balance costs, and more.
In TeamCity Pipelines, build runners tell the system how to build, test, and package your code.
For this latest release, we’ve revamped how build runners work. Here’s what’s new:
New design for creating a build step
With this most recent release of TeamCity Pipelines, we’ve made creating build steps even more straightforward. You can now find, select, and configure the runners you need for a given build quickly and easily.
Let’s hear from TeamCity Pipelines Product Designer Tanya Konvaliuk about why and exactly how we decided to make these changes.
“The old setup for creating job steps worked fine for simple tasks, but it became too limited when we needed more complex runners. It couldn’t handle extra runners or customization well.
To fix this, we’ve introduced a new, more flexible design. It’s scalable, making it easy to add more runners and settings in the future.”
We’ve fixed some bugs and made several improvements for a better user experience with TeamCity Pipelines.
We’ve resolved the issue so that TeamCity Pipelines now accurately reflects when no artifacts are produced by jobs.
We’ve updated the system so that users remain on the pipeline settings page after renaming a pipeline.
Now, even longer job names fit in the container on the pipeline overview page.
Did you know?
In TeamCity Pipelines, you can make use of Linux, Windows, and per-minute macOS agents hosted by JetBrains. You can also install self-hosted agents.
You might choose self-hosted agents to have more control over the environment, meet specific hardware or software requirements, or optimize costs by using your own resources.
The TeamCity On-Premises 2024.07.3 bug fix is now available for installation! This update tackles around 20 issues, including problems with failing SSH agents on Windows, incorrect counts of pending changes and available agents, unexpected build cancellations, and more. Each bug fix not only improves performance, but also patches security vulnerabilities, making it highly recommended to stay up to date with these minor releases.
For the complete list of the issues fixed in this version, please refer to our release notes.
As with other minor updates, TeamCity 2024.07.3 shares the same data format with all 2024.07.x releases, allowing you to easily upgrade or downgrade within these versions without the need to back up or restore.
We recommend upgrading to apply the latest improvements and security fixes to your TeamCity server.
Before you start, read our upgrade notes and use one of the following options to upgrade:
One of the key components of TeamCity’s ecosystem is the bundled Amazon Cloud Agents plugin, which allows our customers to leverage cloud agents to scale the performance of their build farms on demand. Given its widespread use and importance, ensuring its optimal performance is essential.
As our user base and workload have grown, we’ve noticed some initial performance oversights becoming more pronounced, prompting a closer look at the plugin’s performance.
Performance issues with the Amazon Cloud Agents plugin
The main culprit for performance issues with the plugin was thread management. Each Cloud Profile would create its own pool to manage instance operations and additional service threads for internal purposes. When the number of Cloud Profiles was low enough, the performance hit was hardly noticeable.
However, the impact of this issue worsened as users kept adding more and more profiles. Given that TeamCity itself is a complex system operating hundreds of threads continuously, adding a considerable number of additional threads is not something we should take lightly.
Implementing parameterizable shared thread pools for recurring and one-off tasks solves this problem. This approach allows asynchronous operations, such as instance provision requests that don’t wait for an instance to start, to be executed promptly without needlessly straining the system.
But what happens when the number of threads exceeds the measured optimal amount for a system?
The short answer: it causes gradual performance degradation. Eventually, even a highly parallel system will suffer from excessive threads. Common problems include, but are not limited to, context switching and synchronization overhead (e.g. locks). Here, we’ll focus on context switching.
What exactly is context switching?
Context switching is a very complex topic with many technical details, but for the purposes of this post, we’ll keep it brief. A context switch is a fundamental OS process that saves the state of a running thread and restores the state of another thread. This includes saving and loading CPU registers, stack pointers, and other information crucial to continuing a thread’s execution from an arbitrary point.
What is the impact of this? Each thread is allocated a CPU time slice known as a “quantum”. However, the context switch overhead reduces the effective CPU time available for actual thread execution. This overhead might include processor cache misses and memory contention, depending on the system and workload.
As with any performance problem: measure, don’t guess.
The solution: applying the patch
Below are some performance charts that cover the period between July 1 and August 25. The patch that aims to fight the aforementioned issues was applied around the middle of this period, on August 9.
The first graph shows the thread count.
Although the reduction might look dramatic at first, the chart’s scale starts at around 575 threads rather than zero, so the overall reduction in threads is ~25%, or ~250 threads.
The next graph shows the queue size of builds waiting to run.
Before the patch was applied, the chart hit 16,000–20,000 builds on multiple occasions, with frequent spikes above 10,000 queued builds. After August 9, the chart clearly becomes more stable and shows much more moderate spikes.
The last graph shows the number of starting cloud agents.
Lowering the average values means we process agents’ provision requests more efficiently and, as a result, cloud agents start noticeably faster. That’s exactly what we observed after August 9: a reduction in both spike frequency and maximum spike values.
What does this mean for our users?
Performance metrics are a great tool to measure the result of your efforts, but one can argue these efforts are more or less futile if end users gain no real advantages. With cloud agents, this is definitely not the case – users directly benefit from faster build processing, shorter build queues, and less time required for an agent to spin up.
And as a cherry on top, eliminating so many threads should raise the overall performance and responsiveness of the entire system, ultimately making it more stable and efficient.
In March 2024, we announced the Beta release of TeamCity Pipelines, a new approach to CI/CD that offers blazing-fast pipelines to optimize your development flow. After six months of fine-tuning, adding features, and gathering feedback, we’re excited to announce that TeamCity Pipelines is officially going GA (General Availability) 🎉
This is a major milestone for us, and we couldn’t have done it without your insights and support throughout the journey.
TeamCity Pipelines is packed with exciting new features and ready to handle your CI/CD workflows with ease. Let’s take a closer look at what’s new.
What’s new in TeamCity Pipelines
Powerful YAML functionality
In TeamCity Pipelines, you can build pipelines visually or with YAML. With YAML autocompletion, real-time suggestions help you write pipelines faster and with fewer mistakes – like having a CI/CD co-pilot by your side!
Visual drag-and-drop CI/CD pipeline editor
Imagine Figma or Miro, but for CI/CD – you can easily define job dependencies, reorder tasks, and map out your pipelines visually with TeamCity Pipelines’ drag-and-drop editor.
It’s not just about making pipelines look pretty (although they do look fantastic, don’t they?) – it’s about making the entire process more intuitive and efficient. Even if you’re not a YAML expert, you’ll be able to create and edit pipelines with ease.
Dependency cache
In addition to other pipeline optimization features that speed up builds by up to 40%, we’ve added the dependency cache option. The first time you run your Maven builds, TeamCity Pipelines will cache those dependencies. In future builds, it reuses that cache, meaning faster builds and less load on your infrastructure.
Self-hosted agents
With self-hosted agents, you can now hook up your own build machines to TeamCity Pipelines. Whether you’re using your own data center or cloud infrastructure, this gives you the flexibility to leverage your existing hardware, maintain security, and scale as needed.
Agent terminal
If you need to check the environment of an agent that runs your build, the agent terminal feature is what you need. You can open the terminal and connect it directly to the agent during a job to view logs, check installed software, or debug issues – all from the UI.
The newest additions
We release a new version of the product every three weeks. With the latest update, we’ve added some pretty cool features. Here’s what’s new.
New VCS providers
In addition to GitHub, you can now also create pipelines for your GitLab and Bitbucket projects.
Clear indications why a job doesn’t start
Sometimes, due to a misconfiguration, a job never starts. Perhaps there are no compatible agents that meet the set requirements, or they’re all busy at the moment.
Now, TeamCity Pipelines provides a clear explanation of why exactly the build doesn’t start and what you can do about it to get your builds up and running.
Self-hosted agents: improvements
Along with JetBrains-hosted agents, you can run jobs on self-hosted agents and set requirements like OS, CPU count, architecture, RAM, or custom specs.
In this iteration, we added the ability to group available agents by OS type. If you set agent requirements that can’t be met, TeamCity Pipelines will also let you know.
Gradle dependency cache
The Enable dependency cache option lets TeamCity cache dependencies from the first pipeline run and is now available for the Gradle runner too.
Learn more about what’s new in TeamCity Pipelines in our regular Pulse newsletter.
If you’ve been with us throughout the Beta period, first of all – thank you! Your feedback helped shape TeamCity Pipelines into what it is today. For those who are new or haven’t tried it in a while, there’s never been a better time to jump in and check out the latest possibilities.
You can try out TeamCity Pipelines completely for free for 14 days.
As always, we’re looking forward to your feedback! You help us make the product better with every release 🫶
Godot, an open-source game engine known for its versatility and simplicity, is gaining popularity among both indie developers and the broader game development community. While Godot comes with its own scripting language, GDScript, it also supports the widely-used C# language, familiar to Unity and Unreal Engine developers.
In this tutorial, we will use “Dodge the Creeps”, a popular beginner project for learning the Godot game engine. “Dodge the Creeps” is a simple 2D game where players avoid enemies (creeps) while learning essential game development concepts. We will set up automated build pipelines to build, test, and publish the game using TeamCity and work with both the GDScript and C# versions of the game.
Godot with GDScript
Build and export GDScript game for Windows target
In TeamCity, there are two primary ways to configure build chains: through the UI or using the Kotlin DSL (Configuration as Code). Each method has its advantages depending on your needs.
UI configuration is the more traditional approach, where users set up and manage build configurations directly within TeamCity’s graphical interface. It’s intuitive and accessible, making it easy to quickly define builds, adjust settings, and link build steps together without needing any coding skills. The downside is that this approach can be cumbersome for larger, more complex pipelines.
The TeamCity Kotlin DSL allows you to define build chains programmatically. This approach offers version control and reusability, making it ideal for complex build setups and automation.
In these examples, we will show you how to set up Godot pipelines both ways. We will use the barichello/godot-ci Docker images (barichello/godot-ci:4.2.1) to set up the Godot environment. This includes the Godot engine itself, export templates, and the Butler utility for publishing games to Itch.io.
UI configuration:
In the TeamCity UI, create a new build configuration.
Set up your version control system (VCS) settings to pull the code from your repository.
Add a build step using the Command Line runner.
Configure the build step with the following commands to build your Godot game:
The first command is necessary to open the editor in headless mode to import all assets, and the second command is then used to export the game for a specific platform (like Windows) in release mode, packaging everything into the final build.
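The two commands described above can be sketched as follows. This is an assumption based on the Godot 4.2 CLI; the “Windows Desktop” preset name must match your export_presets.cfg, and the output path is a placeholder.

```shell
# First run: open the editor headlessly once so all project assets get imported.
godot --headless --editor --quit
# Then export the game in release mode for the Windows preset.
mkdir -p build/windows
godot --headless --export-release "Windows Desktop" build/windows/dodge-the-creeps.exe
```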
Kotlin DSL configuration:
If you’re working with the Kotlin DSL (Configuration as Code), you can define the build configuration programmatically:
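Here is a hedged sketch of what such a configuration might look like. The object name, Docker image usage, and artifact paths are illustrative assumptions rather than project specifics:

```kotlin
import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.buildSteps.script

// Hypothetical build configuration mirroring the UI steps above.
object GodotWindowsBuild : BuildType({
    name = "Build Godot Game (Windows)"

    vcs {
        root(DslContext.settingsRoot)
    }

    steps {
        script {
            name = "Import assets and export game"
            scriptContent = """
                godot --headless --editor --quit
                godot --headless --export-release "Windows Desktop" build/windows/dodge-the-creeps.exe
            """.trimIndent()
            dockerImage = "barichello/godot-ci:4.2.1"
        }
    }

    // Publish the exported binary as a build artifact.
    artifactRules = "build/windows => game-windows.zip"
})
```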
Here, we see the result of a build as a published artifact. To publish the artifact, it must be configured in the “General settings” tab of the build configuration.
As you can see, building a Godot game in TeamCity is a straightforward process. For smaller or simpler projects, the UI-based configuration offers an intuitive way to quickly set up build pipelines without the need for coding. However, for more complex build setups, the Kotlin DSL provides a more powerful, scalable, and reusable solution through code-based configuration.
Unit tests reporting
In the examples above, we used the command line runner, which means tests aren’t detected automatically by TeamCity as they usually would be. However, we can still use the XML Report Processing build feature to handle the test results.
For the GDScript version of Godot, several unit testing frameworks are available, with the most popular being GdUnit, WAT, and GUT. All of these frameworks support JUnit standard XML reporting, which TeamCity can easily import and process using its XML Report Processing feature.
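As a sketch, here is how the feature might be enabled in the Kotlin DSL. The report path is an assumption and should point wherever your chosen framework writes its JUnit XML:

```kotlin
import jetbrains.buildServer.configs.kotlin.buildFeatures.XmlReport
import jetbrains.buildServer.configs.kotlin.buildFeatures.xmlReport

// Inside the BuildType definition:
features {
    xmlReport {
        reportType = XmlReport.XmlReportType.JUNIT
        rules = "+:reports/**/*.xml"   // assumed location of the test reports
    }
}
```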
While GDScript offers strong integration with Godot as a default language, C# provides distinct advantages. Since C# is compiled, it performs significantly better, often running up to four times faster than the interpreted GDScript, which can be crucial for gameplay optimization.
As a strongly typed language, C# reduces potential coding errors, making it easier to understand and maintain complex code. Additionally, using .NET opens up access to powerful development tools like JetBrains Rider, dotTrace, and dotMemory, improving workflow and debugging.
Building and Exporting a .NET 8 game
Building a .NET project in Godot is similar to building a GDScript project. The main difference is that we will use a different Docker image (barichello/godot-ci:mono-4.2.1), and we will need to install .NET separately, because the image contains only Mono.
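Installing the SDK inside the image can be sketched with the official dotnet-install script. The channel and paths follow Microsoft’s documented defaults, but treat the exact flags as an assumption to verify for your setup:

```shell
# Download and run the official .NET install script inside the
# barichello/godot-ci:mono-4.2.1 container to get the .NET 8 SDK.
wget -q https://dot.net/v1/dotnet-install.sh -O dotnet-install.sh
bash dotnet-install.sh --channel 8.0
export DOTNET_ROOT="$HOME/.dotnet"
export PATH="$DOTNET_ROOT:$PATH"
dotnet --version   # verify the SDK is available
```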
Unit testing C# game with on-the-fly test reporting
First, we add the TeamCity.VSTest.TestAdapter NuGet package to get on-the-fly test reporting during the build. The unit test project file should look like this:
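A hedged sketch of such a project file follows. The test framework (xUnit here) and package versions are assumptions; TeamCity.VSTest.TestAdapter is the package the post refers to:

```xml
<!-- Hypothetical unit test project file; pin versions to what your project uses. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <IsPackable>false</IsPackable>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.8.0" />
    <PackageReference Include="xunit" Version="2.6.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.5.3" />
    <!-- Enables on-the-fly TeamCity test reporting via service messages. -->
    <PackageReference Include="TeamCity.VSTest.TestAdapter" Version="1.0.40" />
  </ItemGroup>
</Project>
```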
In this example, the script begins by setting up the .NET environment, downloading the necessary .NET SDK (version 8.0 in this case). It then imports the game assets into Godot, runs the unit tests, and finally exports the game to a Windows executable. TeamCity’s blockOpened and blockClosed service messages allow developers to clearly see each stage of the process in the build logs, making it easier to debug and track.
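The blockOpened and blockClosed service messages mentioned above are plain lines written to stdout. A minimal runnable sketch, with the actual build commands elided:

```shell
# Helpers that emit TeamCity block service messages; everything logged
# between blockOpened and blockClosed is grouped into a named, collapsible
# section of the build log.
open_block()  { echo "##teamcity[blockOpened name='$1']"; }
close_block() { echo "##teamcity[blockClosed name='$1']"; }

open_block "Run unit tests"
echo "dotnet test output would appear here..."
close_block "Run unit tests"
```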
Publish the game
WebGL to S3
The next step is distributing the game to your players. In this case, we’re automating the upload of the game to platforms like S3 for HTML5 builds.
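A hedged sketch of such an upload step: the Web export preset name and the bucket path are placeholders, and credentials are assumed to come from the standard AWS environment variables or an instance profile.

```shell
# Export the HTML5 build, then sync it to an S3 bucket for static hosting.
godot --headless --export-release "Web" build/web/index.html
aws s3 sync build/web "s3://my-game-bucket/dodge-the-creeps/" --delete
```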
Publish the exported Windows game to itch.io using the Butler tool
Butler is a command-line tool created for developers to easily upload, update, and manage their games or digital projects on the indie game platform itch.io. Instead of manually uploading files through the website, Butler automates the process, ensuring smooth, incremental updates without re-uploading entire files.
It also provides version control, allowing developers to push only changes, which reduces upload time. Ideal for game developers who frequently update their projects, it simplifies deployment, improves workflow efficiency, and ensures users always have the latest version of their content.
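An illustrative push command, with placeholder user, game, and channel names. BUTLER_API_KEY lets Butler authenticate non-interactively, and %build.counter% is a TeamCity build parameter:

```shell
# Push the exported Windows build to the "windows" channel on itch.io.
export BUTLER_API_KEY="<your-itch.io-api-key>"
butler push build/windows my-user/dodge-the-creeps:windows \
  --userversion "1.0.%build.counter%"
```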
As you can see, setting up a Godot build pipeline with TeamCity is relatively straightforward, largely because of the Godot engine’s simplicity. This is why we don’t provide dedicated support in the form of a runner.
However, if your scenario is more complex and you believe better support is needed, please feel free to share your suggestions in the comments section of this blog post.
We’re excited to bring you an exclusive interview with Nana Janashia, the mastermind behind the largest DevOps YouTube channel – TechWorld with Nana. Nana has built an incredible career, sharing valuable insights about automation, cloud-native technologies, and everything DevOps with her global audience. But did you know that her journey started in marketing?
In this interview, Nana shares her personal story of how she went from being a marketing student to a DevOps expert. She’ll cover how she navigated the challenges of learning programming, landed her first internship, and eventually discovered her passion for DevOps and Kubernetes. It’s an inspiring journey for anyone considering a career shift or diving into the tech industry.
Nana sat down with Marco Behler, Product Manager for TeamCity Pipelines, to explore JetBrains’ latest CI/CD solution – TeamCity Pipelines. Watch as Nana tries out the tool for the first time and shares her candid thoughts on its features, ease of use, and how it compares to the competition. From CI/CD pain points to the latest DevOps trends, Nana and Marco cover it all, wrapping up with her final verdict on how TeamCity Pipelines stacks up.
Curious to hear her thoughts on TeamCity Pipelines? Check out the full video to get her live review and verdict on our newest CI/CD platform!
Adopting remote development is a significant decision for any company. At JetBrains, we talk to many customers about this shift. While we see growing demand, we also encounter many misconceptions about what remote development with cloud development environments (CDEs) can and cannot do.
1. What problems can you solve with remote development?
Before diving into whether CDEs make sense for your business, let’s identify the core problems that remote development helps to solve:
Non-productive time: Much development time is wasted on setting up development environments (e.g. when onboarding new staff or dealing with “works on my machine” issues), switching branches for small tasks (IDEs need to reindex the codebase, rebuild the project, etc.), and waiting for builds to complete. This time could be better spent on actual development work. CDEs can help minimize these delays by providing standardized, ready-to-use environments for each task.
Security risks: For industries like finance and healthcare, or for companies that use contractors, local development poses numerous security risks: company code is more vulnerable when stored on local machines, and the possibility of policy violations or data breaches is greater. With CDEs, code and sensitive data are stored in the cloud, with strict access control and monitoring.
Local machine limitations: Developers often face hardware constraints, such as insufficient RAM, CPU, or GPUs for heavy machine learning tasks. CDEs scale with your needs, providing the necessary resources on demand to eliminate these limitations.
However, the simple fact that you’re facing these challenges doesn’t automatically mean that remote development is the right solution for your team. Whether it’s worth adopting remote development depends on various factors, such as the scale of your team, your development workflows and infrastructure, and many others. This post will guide you through these considerations in the form of a questionnaire. We’ll explore key areas like:
Organization scale: How the size and distribution of your team affect the need for remote development.
Development process: The type of projects you work on and your development workflows can heavily influence how beneficial CDEs will be for your team.
Security and compliance: How remote development environments can help you meet security and compliance requirements.
Infrastructure and resources: How your current infrastructure and internet connection may impact the usage of CDEs.
Additional considerations: Other factors like software licensing and disaster recovery that might affect your decision.
JetBrains CodeCanvas
In September 2024, we announced the release of CodeCanvas, our solution for remote development. CodeCanvas is a CDE orchestration tool that can help you centralize the configuration of dev environments for specific projects, manage the CDE lifecycle (from creation to deletion), and benefit from the support of the majority of JetBrains IDEs and VS Code. For more details, check out our announcement blog post and watch the overview video.
2. Organization scale
a. Number of developers in your company
0–30 developers For smaller teams, traditional local development may still be a cost-effective solution. The overhead of managing cloud infrastructure, setting up CDEs, and maintaining cloud resources might not justify the benefits of CDEs.
Recommendation: Stick to local development unless you need CDEs for other reasons, such as enhanced security, standardized environments, etc.
30+ developers At this scale, managing multiple development environments can become a challenge, especially when different projects require different configurations. Here, remote environments can simplify onboarding and make transitions between projects easier. With tools like CodeCanvas, setting up and managing these environments at scale becomes more efficient.
Recommendation: Consider hybrid solutions, where some environments are remote and others are local.
100+ developers Managing local development at scale can be highly inefficient. As your team grows, CDEs simplify scaling by enabling centralized management, enforcing security, and minimizing local machine setup.
Recommendation: CDEs are highly recommended at this scale.
b. Number of projects and project complexity
It’s challenging to provide a definitive answer based solely on the number of projects your company is developing. The impact of multiple projects on your development process depends on several key factors.
Key considerations:
Project complexity: Even a single project in development may require complex configuration: multiple modules, numerous dependencies, specialized hardware requirements (like GPUs for AI/ML tasks), and so on. These complexities are also constantly evolving, with updates to dependencies, frameworks, or hardware demands that each developer must keep up with. CDEs overcome these challenges by providing standardized and ready-to-use environments for all developers.
Developer workload and context switching: Consider how many projects a single developer works on and how frequently they switch between them. Frequent switching can lead to significant downtime, since each project may require reconfiguring the same local environment. CDEs eliminate this overhead by providing a pre-configured environment for each project.
Environment consistency: The more projects your team handles, the harder it is to ensure that all developers are working with the same environment configuration. Variations in local setups can result in the “works on my machine” problem. CDEs centralize environment configuration, ensuring every developer works with consistent setups across all projects.
Recommendation:
Few projects, low complexity: If your company develops a small number of simple projects, CDEs might not offer significant benefits, as the overhead of local environment management remains manageable.
Multiple projects, high complexity: CDEs are highly beneficial for companies managing multiple and/or complex projects – especially those involving AI/ML workflows. They reduce setup time, improve consistency, and help scale GPU resources efficiently.
c. Geographical distribution of your team
When considering CDEs, latency is the critical factor that can make or break the experience. We recommend keeping latency under 100 ms for a smooth and responsive development workflow.
All your developers are in one location If your entire team is located in one place with fast and stable access to nearby cloud services, maintaining low network latency is simpler.
You have distributed teams across the globe Network latency can have a more noticeable impact on globally distributed teams. Latency greater than 100 ms can disrupt the responsiveness of CDEs, causing developers to experience delays while coding. To mitigate this, you must deploy dev environment clusters as close to your developers as possible, often in different regions. This reduces latency but introduces additional operational costs for setup and maintenance.
Recommendations:
Assess network conditions: Evaluate the latency between your developers’ locations and the cloud regions where CDEs can be hosted. Keep latency below 100 ms to ensure a smooth development experience.
Regional deployments: If feasible, deploy dev environment clusters in each region where your developers are located. This setup reduces latency but requires more operational overhead.
Hybrid approach: Consider a hybrid model where developers in regions with low latency to cloud data centers use CDEs while others continue with local dev environments.
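As a rough first check of the latency assessment described above, you can time TCP handshakes to candidate cloud regions. This is only an illustrative sketch – the hostnames are placeholders and the 100 ms threshold comes from the recommendation above, not from any official tool:

```python
import socket
import statistics
import time

THRESHOLD_MS = 100  # recommended ceiling for a responsive remote IDE


def tcp_connect_ms(host: str, port: int = 443) -> float:
    """Time a single TCP handshake as a rough round-trip proxy."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=3):
        pass
    return (time.perf_counter() - start) * 1000


def classify(samples_ms: list) -> str:
    """Judge a candidate region by the median of several latency samples."""
    median = statistics.median(samples_ms)
    return "suitable" if median < THRESHOLD_MS else "too far"


# Example usage (hostnames are placeholders for your candidate regions):
# for region in ["ec2.eu-west-1.amazonaws.com", "ec2.us-east-1.amazonaws.com"]:
#     samples = [tcp_connect_ms(region) for _ in range(5)]
#     print(region, classify(samples))
```

Using the median rather than a single sample smooths out transient network spikes.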
d. Growth rate of your team
You have a stable team size If your development team is stable or only slowly growing, local environments or a traditional development setup might still be manageable. However, adopting CDEs can still provide significant benefits, like standardized environments, improved developer productivity, and better security. Other sections of this blog post address all these benefits.
You have a rapidly growing team For rapidly growing development teams, adopting CDEs is not just a benefit – it’s almost essential for managing scale, streamlining onboarding, and controlling infrastructure costs:
Resource scaling: As your team grows, managing individual machines and scaling hardware can quickly become overwhelming. CDEs can dynamically allocate the necessary compute resources for your team.
Onboarding efficiency: Onboarding new developers quickly is essential in a fast-growing team. Case studies show that new hires can jump straight into development with pre-configured environments, reducing onboarding time from days to minutes.
Cost efficiency: CDEs prevent overspending on idle resources by scaling down when not in use, allowing infrastructure to grow in line with team size.
3. Development process
a. Type of developed applications
Understanding the nature of the applications your team develops is crucial in determining whether CDEs are suitable for your workflow.
Server-side applications (web apps, backends, APIs) CDEs are well-suited for server-side development, which typically doesn’t require a graphical user interface (GUI) or specialized hardware. CDEs support port forwarding, allowing developers to run their applications remotely and access them through the web browser on their local machine.
Recommendation: CDEs are a great fit.
Client-side web applications As with server-side apps, if a client-side app doesn’t require a native GUI, CDEs can effectively replace local machines.
Recommendation: CDEs are a great fit.
Mobile applications (iOS and Android)
iOS apps: CDEs have limitations for iOS development because of Apple’s ecosystem requirements. Xcode (the essential tool for iOS development) doesn’t currently support remote development.
Android apps: Android development is supported in CDEs. However, there may be challenges running the Android Emulator, which is a separate resource-intensive application.
Recommendation: Remote development isn’t fully available for mobile development yet. For iOS development, using local macOS machines or VDI solutions is the only viable option. For Android development, CDEs are supported, though there may be some nuances to consider.
Desktop applications CDEs are typically Linux-based (as is the case with CodeCanvas). You can use VNC to access desktop windows, which allows you to interact with applications visually. However, there are limitations, especially if you are developing for other operating systems. Even if you’re building cross-platform apps, testing and building for macOS or Windows still require their own respective environments.
Recommendation: CDEs are a good fit for Linux apps but not for macOS and Windows.
Game development Game development with engines like Unity or Unreal Engine requires running the engine application alongside your IDE. This means the engine window must somehow be shared between the remote dev environment and a local machine. Though this can be done via VNC, both Unity and Unreal Engine require GPUs for real-time rendering and other tasks, and VNC doesn’t support hardware-accelerated rendering – so while you can view the remote desktop, these engines won’t perform well over it.
Recommendation: Developing games in CDEs is not recommended at this point.
Specialized applications (embedded systems, IoT, hardware integration) Developing and testing applications for embedded systems, IoT devices, and hardware-integrated solutions often requires direct access to hardware, such as microcontrollers, sensors, and other external peripherals.
Recommendation: Stick to local environments unless you need CDEs for other reasons, such as enhanced security, standardized environments, etc.
b. Branching strategies
Branching strategies play a significant role in your development workflow and can influence the benefits of CDEs for your team.
Your team uses flows with feature branches If your team utilizes a branching strategy that involves creating feature branches for new features, bug fixes, or experiments (e.g. GitFlow, GitHub Flow, or others), CDEs can offer substantial advantages:
Isolated environments: Developers can easily create a dedicated CDE for each feature branch, ensuring that changes are isolated and do not interfere with other work.
Quick context switching: Developers can switch between different tasks or features by simply launching the corresponding CDEs. This means there are no overheads like those associated with changing branches locally, which often requires rebuilding indexes, fetching dependencies, and waiting for the environment to be ready.
Consistent setups: Each CDE is a fresh environment based on the branch’s code, reducing issues caused by leftover artifacts or configurations from previous work.
For example, a developer is assigned to fix a bug while already working on a new feature. Instead of stashing changes or juggling branches locally, they can:
Keep the CDE for the new feature running.
Run a new CDE for the bug fix on the appropriate branch.
Switch between CDEs instantly, maintaining productivity and reducing context-switching costs.
Recommendation: Adopting CDEs can significantly speed up development.
Your team uses flows without feature branches If your team employs trunk-based development or ad hoc commits to main, CDEs can still provide some benefits:
Fresh environments for each task: When starting a new task in the morning, developers must fetch the latest changes from the main branch. On large projects, this could mean pulling in 100+ commits each day. After fetching, the IDE needs to rebuild indexes and dependencies, which can take up to 30 minutes or more. With CDEs, the warmup feature automatically builds and prepares all the data overnight, meaning the developers can start their work almost immediately.
Reduced local setup overhead: CDEs eliminate the need to manage local environments, reducing issues related to configuration drift or dependency conflicts.
Task isolation: Even without branches, using separate CDEs for different tasks can help isolate work and prevent unintended interactions between changes.
Recommendation: Even without branches, CDEs can help maintain a clean working state and reduce setup time.
c. Code reviews and merge requests
Your team uses code reviews and merge requests Code reviews and merge requests that require approval are essential practices for maintaining high code quality. If your workflow includes these practices, CDEs can greatly enhance their efficiency:
Instant environment setup: Very few developers are willing to switch their entire local project to a review branch, as this can result in hours of setup – especially when new dependencies are introduced during the review. With CDEs, reviewers can quickly spin up a dedicated dev environment with a specific branch or commit under review.
In-depth analysis and experimentation: With a CDE, reviewers can open the reviewed code in a full-featured IDE, allowing them to navigate the codebase, understand the context, and even run/debug the code. Moreover, they can test proposed changes without affecting their local setup.
Your team doesn’t use code reviews and merge requests Without these practices, the benefits of CDEs in this context may be insignificant. However, we strongly recommend implementing code reviews and merge requests, at least for critical parts of your codebase, as they can greatly enhance code quality and collaboration.
Recommendation: If you rely on code reviews and merge requests, CDEs will perfectly fit into your workflow.
d. Onboarding of new developers
The complexity of your development environment can greatly impact the time it takes for new hires to become effective team members.
Quick onboarding (less than a day) If your current setup allows new developers to install and configure their local environments in just a few hours, CDEs may offer limited additional value.
Recommendation: Stick with local development unless you need CDEs for other reasons, such as enhanced security, standardized environments, etc.
Complex onboarding (several days or more) During onboarding, setting up a development environment with complex configurations, access setup, multiple dependencies, or large codebases can take days or weeks.
Recommendation: Adopt CDEs, such as CodeCanvas, to significantly reduce onboarding time. Pre-configured environments allow developers to avoid the time-consuming task of local setup and start working right away.
Frequent onboarding (rapid growth or high turnover)
If your company is rapidly expanding or experiencing high turnover, the cumulative time spent on onboarding becomes significant. Ensuring each new developer has a consistent environment is critical to maintaining productivity and reducing errors.
Recommendation: Regardless of how long onboarding takes in your company, CDEs will help you greatly improve this process.
e. Internal tools
Your company has platform teams building internal developer tools
In large organizations, platform teams often create specialized tools for developers, such as custom authentication mechanisms, CLI tools for managing cloud resources, and others. The challenge lies in delivering and configuring this tooling consistently across all developer machines.
With the centralized management of CDEs, this becomes much easier. Platform teams can ensure that all required tools, configurations, and updates are included in the standard CDE templates – all developers will work in properly configured environments without needing to install or update the tools themselves manually.
Recommendation: CDEs can significantly simplify tool adoption.
Your company doesn’t have such teams
Recommendation: Stick to local environments unless you need CDEs for other reasons, such as enhanced security, standardized environments, etc.
4. Security and compliance
a. Strict security or compliance requirements
You have significant compliance needs (e.g. fintech, healthcare) If your organization operates in industries with stringent security or compliance mandates – such as finance, healthcare, or government – CDEs can offer several security advantages:
Data isolation: With CDEs, your source code remains within the secure cloud infrastructure, reducing the risk of local device vulnerabilities or leaks. Of course, users can still retrieve the data from a remote environment if they really need to, and you can minimize this risk further with third-party solutions, such as data loss prevention (DLP) or monitoring tools.
Centralized control and role-based access control (RBAC): Tools like CodeCanvas make it easier to enforce access controls, track activity, and comply with strict regulations like SOC 2 or HIPAA. Built-in RBAC ensures that only authorized personnel can access specific data, adding an extra layer of security.
Recommendation: CDEs are a valuable tool for meeting stringent industry regulations.
You have standard security measures For organizations without strict compliance requirements, CDEs still offer better security than local machines, simply “by design”. Code and data are housed in secure cloud environments, preventing the risks associated with local storage.
Recommendation: Evaluate CDEs for additional security benefits.
b. Use of contractors or third-party developers
Your team works with contractors or third-party developers When working with external teams, security is the main concern, and CDEs can be a great solution:
Fast onboarding via provisioning pre-configured dev environments.
Limited access with role-based access control (RBAC): Contractors have access only to the specific dev environments they need. Once a contractor completes their work, CDEs allow you to swiftly revoke access.
Recommendation: CDEs are highly beneficial.
Your team doesn’t work with contractors or third-party developers Even if contractors or third-party developers are not part of your workflow, CDEs may still offer benefits in terms of team management and security.
Recommendation: Evaluate CDEs for additional security benefits.
c. Need for audit trails and activity logs
Your team requires audit trails and activity logs CDEs can track key actions related to development environments, such as when a CDE is created, modified, or run for a specific project. This allows for transparent tracking of who accessed what and when, a critical requirement for security reviews and audits.
Recommendation: CDEs are recommended for teams requiring centralized tracking of environment-related actions: creation, usage, changes to configuration templates, and so on.
Your team doesn’t need audit trails or detailed logs Even if audit trails and detailed logging aren’t critical to your organization, CDEs may still offer benefits in terms of team management and security.
Recommendation: Evaluate CDEs for additional security benefits.
5. Infrastructure and resources
a. Infrastructure setup
Your team uses local machines only In terms of pure infrastructure costs, local machines will always be cheaper than remote development. However, the benefits of CDEs lead to indirect savings:
Reduced non-productive time (NPT): Remote development reduces the time developers spend setting up environments, switching branches, or waiting for indexing. These tasks, often seen as downtime, are greatly minimized.
Lower hardware costs: With CDEs, developers no longer need powerful machines, as heavy computing tasks happen in the cloud. This approach significantly reduces the need to provide high-end hardware for every developer. If developers occasionally need more powerful hardware, they can access it through the cloud without needing a dedicated machine.
Your team uses virtual machines for development If you’re already using VMs, adopting CDEs with orchestration tools like CodeCanvas can further optimize your costs:
Scalable resources: CDEs offer dynamic scaling, ensuring that you only use resources as needed, preventing over-provisioning or leaving VMs idle.
Auto-shutdown: Automatically shutting down environments when they’re not in use helps reduce costs and avoids wasting resources.
Cheaper storage: Tools like CodeCanvas can automatically move data of inactive dev environments to more cost-effective storage, further reducing infrastructure expenses.
Auto cleanup: Unused or abandoned CDEs can be automatically deleted, freeing up pricey cloud storage.
Your team has some infrastructure in the cloud (AWS, Azure, Google Cloud) For companies already using cloud infrastructure, integrating CDEs into the existing setup can be a smoother and potentially more cost-effective process:
Existing expertise: Your cloud experts can easily set up and maintain an orchestration tool like CodeCanvas.
Access to cloud resources: CDEs have built-in access to the resources hosted in the same cloud (via Kubernetes service accounts).
Cost benefits through scale: By moving local development to the cloud, you might see cost advantages through bulk usage or negotiated discounts with cloud providers.
Recommendation: Be prepared for remote development to be more expensive in terms of infrastructure costs, but you may save indirectly through improved productivity, reduced expenses on local hardware, and optimized resource management. The actual savings depend on factors like team size and project duration – the more developers you have and the longer the time frame, the more you save.
b. Internet connection reliability
Remote development heavily depends on fast and stable internet connections.
Your team has strong, reliable internet If your internet is reliable and provides low latency (under 100 ms) to major cloud service providers (AWS, Google Cloud, Azure), then CDEs are a suitable option for you.
Your team experiences intermittent internet issues or has slower bandwidth With remote development, no internet means no development. Slow or unreliable internet can significantly impact productivity. Latency greater than 100 ms can make interactions with the remote IDE frustrating, causing delays during typing.
Recommendation: If you want to adopt CDEs, ensure your latency to cloud providers is consistently below 100 ms. Additionally, it’s vital to have a backup internet plan. Without the internet, you won’t be able to access your dev environments or code, so ensure you have a second provider, among other backup options.
6. Additional considerations
a. Software licensing and compliance
IDE licensing: When using CDEs, IDE licenses (e.g. for JetBrains IDEs) function as they would locally. Developers are required to have valid licenses to use their chosen IDE within the cloud environment, as CDEs do not manage or provide these licenses automatically.
Licensing for additional tools and dependencies: Licensing may be more complex for specialized development tools, frameworks, or dependencies. Some tools may have specific licensing models for cloud usage, such as geographic restrictions or limits on the number of users. Before deploying these tools in CDEs, it’s essential to verify whether additional steps are needed to comply with licensing terms.
License management: CDEs do not offer centralized management for software licenses. If you’re using multiple third-party tools, managing these licenses (e.g. tracking usage, renewals, and compliance) may require an external license management system to prevent over-deployment or violations.
b. Disaster recovery and business continuity
Recovery time: In traditional local setups, recovery time depends heavily on your ability to restore hardware, retrieve backups, and reconfigure environments. In CDEs, recovery times can be significantly reduced as dev environments are created on demand from pre-configured templates.
Internet dependency: Since CDEs depend on constant internet access, a failure in connectivity could result in a total development halt. It’s critical to have a backup internet connection in place or alternative local environments that can be activated in the case of extended internet outages.
Cost and complexity: Implementing a fail-safe infrastructure in CDEs can increase both the cost and complexity of your setup. However, the trade-off is enhanced resilience and potentially reduced downtime.
c. CDEs and AI development
AI and autonomous developers: As we move toward a future where AI-autonomous developers become a reality, CDEs will play a crucial role. Remote development offers the infrastructure and scalability necessary for AI agents to run. AI models can use CDEs to perform code generation, testing, and deployment autonomously.
We’re excited to bring you an update on our smooth parameter replacement feature. It’s already made parameter replacement easier, and now we’re taking it even further with some key improvements.
In this iteration, we’re rolling out the smooth parameter replacement feature to more areas. You’ll now find it in almost all runner fields except for Docker autocompletion.
We’re also bringing it to pipeline settings and job artifacts for a smoother experience, while expanding the API’s parameter suggestions and improving context sensitivity in both the UI and the API. In short, setting up your pipeline will be even easier and more intuitive.
In the next iteration, we’re planning to tackle some more complex areas, such as Docker autocompletion and fields like passwords. We’re also aiming to improve API search accuracy to make parameter replacement even smoother. Stay tuned!
Bug fixes and improvements
Downloading artifacts from the Pipeline page no longer results in a “Requested module (build configuration) does not exist” error. Now, downloading your artifacts, whether the whole archive or individual files, works as expected.
The dropdown tooltip for smooth parameter replacement has the correct width and is no longer squashed into a small narrow box.
We’ve streamlined the process of setting up self-hosted agents by checking Java compatibility right from the start. Now, the script instantly verifies that JAVA_HOME or JRE_HOME is set to the correct version (between 8 and 18), saving you time and avoiding setup delays.
We’ve improved the permissions system so that users with viewer access can now clearly see a warning message when trying to access a pipeline they don’t have permission to view.
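The Java compatibility gate mentioned in the self-hosted agent fix above is easy to express. Here is an illustrative sketch of the version-parsing logic – not the actual TeamCity setup script; only the supported range of 8 to 18 comes from the text:

```python
import re

SUPPORTED_MAJORS = range(8, 19)  # Java 8 through 18, per the agent requirements


def java_major(version_string: str) -> int:
    """Extract the major Java version, handling the legacy '1.x' scheme."""
    m = re.match(r"(\d+)(?:\.(\d+))?", version_string)
    if not m:
        raise ValueError(f"unrecognized Java version: {version_string!r}")
    major = int(m.group(1))
    if major == 1 and m.group(2):  # e.g. "1.8.0_372" -> 8
        major = int(m.group(2))
    return major


def is_supported(version_string: str) -> bool:
    """Check whether a JAVA_HOME/JRE_HOME installation would pass the gate."""
    return java_major(version_string) in SUPPORTED_MAJORS
```

In practice, the version string would come from parsing the output of `"$JAVA_HOME/bin/java" -version`.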
Did you know?
We sat down with Nana Janashia, the mastermind behind the largest DevOps YouTube channel – TechWorld with Nana – to talk about her career in DevOps and how it started. Nana also gave TeamCity Pipelines a try.
Watch as Nana tries out the tool for the first time and shares her candid thoughts on its features, its ease of use, and how it compares to the competition.
That’s it for now!
As always, feel free to get in touch with us if you have any questions. Happy building!
We’ve got a few exciting updates about the Unreal Engine plugin announced in the previous blog post.
TL;DR – we’re adding Unreal Game Sync (UGS) integration and open-sourcing the plugin. These updates are all about making the CI/CD experience smoother for Unreal Engine devs and getting the community more involved.
UGS
Before diving in, let’s quickly go over what Unreal Game Sync (UGS) is for anyone who might not be familiar with it or could use a refresher. In essence, UGS is a lightweight UI for Perforce. Typically, you need to build it from source to get started, and while its graphical client is a WinForms application available only on Windows, there is a command-line interface (CLI) version for other platforms. UGS has been around for a while and is widely used by game studios working with Unreal Engine as a collaboration tool.
From a CI/CD perspective, UGS provides valuable insights into a project’s status (if properly set up), such as build statuses, the ability to flag specific changelists as problematic, and more. To give a better overview, here’s a rough diagram of the components involved:
There are quite a few components here, with the central one being the Metadata Server. While deploying it isn’t strictly necessary, it does enable the full feature set of UGS. This is also where CI/CD systems post build information. As shown, there are different possible implementations of the Metadata Server, and it’s worth briefly discussing each:
Epic Metadata Service. This is the original and longest-standing version of the Metadata Server. It requires Windows, IIS, and the older .NET Framework 4.6.2.
Third-party implementation. Thanks to the open-source nature of the server, it’s possible to create your own implementation. One example is RUGS, which is much easier to set up since it supports Docker.
Horde. Technically, this is a full-fledged automation platform recently introduced by Epic. It includes a built-in UGS Metadata Server as well as its own build system. Although it has a built-in metadata server, it doesn’t allow publishing from external sources – the transition to Horde assumes that all metadata is generated internally. Horde is a bit outside the scope of this blog post, so we’re only mentioning it for the sake of completeness.
Entities that the build system is supposed to post to the metadata server are called “badges” in UGS terms. These badges will then show up in the CIS (continuous integration status) column in UGS. It usually looks like this:
As far as we know, the metadata server endpoints don’t currently have authentication. It appears that the server is intended to be used within a secure, closed network, but this is just our understanding and not an official statement.
The plugin supports two scenarios for publishing badges:
Publishing a single badge that reflects the status of the entire build.
Publishing an arbitrary set of badges defined in your BuildGraph script. This applies to the “distributed” execution mode – a special runner mode in which the BuildGraph definition of the build is converted into a set of builds in TeamCity (build chain). For more details, please refer to our previous blog post or the plugin documentation.
The first scenario is pretty straightforward. You only need to configure the Commit Status Publisher build feature and set up a few required parameters.
The second scenario is more complex. In your script, you can define a set of badges and link them to specific nodes to be tracked. Before diving into the scripts, here’s a quick reminder of how the plugin maps BuildGraph entities to TeamCity entities:
BuildGraph → TeamCity
Node → Build step
Agent → Build
For example, if your build process includes compiling an editor, the script might look like this (with unimportant details omitted):
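A minimal sketch of such a BuildGraph fragment is shown below. Element names like `Agent`, `Node`, `Compile`, and `Badge` are standard BuildGraph, but the target, depot path, and agent names are hypothetical:

```xml
<BuildGraph xmlns="http://www.epicgames.com/BuildGraph">
  <Agent Name="Editor Agent" Type="Win64">
    <Node Name="Compile Editor">
      <!-- Compile the editor target (target name is a placeholder) -->
      <Compile Target="MyProjectEditor" Platform="Win64" Configuration="Development"/>
    </Node>
  </Agent>
  <!-- The badge tracks the node of the same name and is reported
       to the UGS metadata server as the build progresses -->
  <Badge Name="Compile Editor" Requires="Compile Editor" Project="//depot/MyProject"/>
</BuildGraph>
```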
Here, we define a badge named “Compile Editor” to track the execution of a node with the same name. In distributed BuildGraph mode, TeamCity will recognize this badge and update the build status as the process progresses.
You can define multiple badges to track different sets of nodes, and TeamCity will monitor all of them based on the specified dependencies:
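A sketch of a script with multiple independent badges might look like this (all names and paths are hypothetical, and the node bodies are omitted):

```xml
<BuildGraph xmlns="http://www.epicgames.com/BuildGraph">
  <Agent Name="Editor Agent" Type="Win64">
    <Node Name="Compile Editor"/>
  </Agent>
  <Agent Name="Client Agent" Type="Win64">
    <Node Name="Compile Client"/>
  </Agent>
  <Agent Name="Server Agent" Type="Linux">
    <Node Name="Compile Server"/>
  </Agent>
  <!-- One badge per node; since the agents have no dependencies on
       each other, the three TeamCity builds can run concurrently -->
  <Badge Name="Editor" Requires="Compile Editor" Project="//depot/MyProject"/>
  <Badge Name="Client" Requires="Compile Client" Project="//depot/MyProject"/>
  <Badge Name="Server" Requires="Compile Server" Project="//depot/MyProject"/>
</BuildGraph>
```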
In this example, there are three agents (each with a single node) that can potentially run concurrently, as they are assigned to different agents and have no dependencies on each other. Each build is tracked by a corresponding badge.
The badge will behave as follows:
“Starting” – displayed as soon as any tracked dependency begins execution.
“Success” – shown when all dependencies complete successfully.
“Failure” – indicated if any dependency encounters an error.
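The aggregation rules above are easy to state in code. The following is only an illustration of the logic, not the plugin’s implementation, and the dependency state names are assumptions:

```python
def badge_state(dep_states):
    """Aggregate the states of a badge's tracked dependencies.

    Mirrors the rules described above: any failure wins, then overall
    success, then "Starting" once any dependency has begun execution.
    """
    if any(s == "failed" for s in dep_states):
        return "Failure"
    if all(s == "succeeded" for s in dep_states):
        return "Success"
    if any(s != "queued" for s in dep_states):
        return "Starting"
    return "Pending"
```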
We have received a lot of feedback since the plugin was introduced in May this year. Thank you to everyone who shared ideas for further development, submitted feature requests, or reported bugs! We’ve also been asked several times whether we’re going to open-source the plugin and, if so, when. That time is now!
With this step, we hope to:
Increase transparency and trust in the plugin’s codebase.
Engage the community for contributions and improvements.
Speed up bug fixes and feature implementations.
The source code is now available on GitHub, and the latest release is ready for download on JetBrains Marketplace. We encourage you to submit feature requests, report any bugs you encounter, suggest enhancements, or fork the plugin and customize it to fit your needs.