1. Background and Problem Definition
In modern software engineering practices, especially in fields tightly coupled with hardware like embedded systems, standardizing the development environment is a fundamental and critical challenge. When team members perform cross-compilation for specific hardware platforms (e.g., RK3568, RV1126) on different operating systems (Windows, macOS, Linux), on different versions of the same OS (Ubuntu 18.04, 20.04, 22.04, etc.), or even on the same OS version but with different gcc/python/glibc versions, this environmental heterogeneity leads to a series of predictable engineering problems:
- Inconsistent Artifacts: Due to differences in developers’ local toolchains and system library versions, builds may succeed in some environments but fail in others, or produce target files with inconsistent behavior.
- High Initial Setup Cost: When new members join a project, they need to follow lengthy documentation to manually install and configure cross-compilers, specific versions of dependency libraries, and SDKs. This process often takes hours to days and is highly prone to errors.
- Complex Maintenance: When the underlying SDK or toolchain needs an upgrade, it must be ensured that every team member updates their local environment synchronously. This lacks atomicity and consistency guarantees, increasing the burden of technical management.
- Complex Version Control for Critical SDKs: In embedded development, different products typically use different SoCs, each requiring a completely separate SDK. If the SDK source for an SoC is packaged, sent to every developer, and unpacked locally, that entire process has to be repeated whenever new features are added. Even setting aside the fact that a single SDK can occupy 20 GB to 100 GB, the cumulative time spent and the CPU load across the whole team are unacceptable.
To systematically solve the above problems, we designed and implemented a standardized development environment solution based on Docker. This article will focus on its top-level architectural design, particularly the core idea of being “configuration-driven.”
2. Core Architecture: Separation of Configuration and Build Logic
The cornerstone of this solution’s architecture is the Separation of Concerns principle, specifically manifested by decoupling the volatile Configuration from the relatively stable Build Logic.
- Build Logic: Refers to the set of instructions defining “how to build an environment.” In this project, it consists of the multi-stage definitions in the `Dockerfile` and a series of shell scripts called during the build process. This part represents the generic steps for building a usable environment and is highly reusable.
- Environment Configuration: Refers to the set of parameters defining “what kind of environment to build.” These include platform-specific SDK download URLs, cross-compiler toolchain versions, project source code paths, etc. They are the variables of the build process and need to be managed externally. A sketch of a repository layout reflecting this split is shown below.
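To make the separation concrete, a layout along the following lines keeps the stable build logic and the volatile configuration physically apart. Everything beyond the files and directories named in this article is illustrative:

```text
.
├── build-dev-env.sh              # unified entry point for building images
├── Dockerfile                    # multi-stage build logic (stable)
├── scripts/                      # shell scripts invoked during the build (stable)
└── configs/                      # environment configuration (volatile)
    ├── platform-independent/
    │   └── common.env            # global settings shared by all platforms
    └── platforms/
        ├── rk3568.env            # RK3568-specific parameters
        └── rv1126.env            # RV1126-specific parameters
```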
To achieve this decoupling, we designed the following key components:
`build-dev-env.sh`: Unified Entry Point for the Build Process
This script is the single entry point for the entire build system, shielding developers from the complexity of the underlying `docker build` command. Its core responsibilities include:
- Parameter Parsing: Receives the target platform as an input parameter (e.g., `rk3568`).
- Configuration Loading: Locates and loads the corresponding configuration file based on the platform parameter.
- Build Execution: Passes the loaded configuration as environment variables to the Docker build engine and starts the build process.
This way, developers only need to focus on the target platform they want to build, without needing to understand the internal details of the build process.
Usage Example
./build-dev-env.sh rk3568
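For illustration, a minimal version of such an entry-point script might look like the sketch below. The `--build-arg` forwarding, the corresponding `ARG` names expected by the `Dockerfile`, and the error handling are assumptions made for this example, not the project’s actual implementation.

```bash
#!/usr/bin/env bash
# Minimal sketch of a configuration-driven build entry point.
# --build-arg names and forwarding details are assumptions, not the actual script.
set -euo pipefail

PLATFORM="${1:?Usage: $0 <platform>, e.g. $0 rk3568}"

COMMON_CFG="configs/platform-independent/common.env"
PLATFORM_CFG="configs/platforms/${PLATFORM}.env"

[ -f "$PLATFORM_CFG" ] || { echo "Unknown platform: ${PLATFORM}" >&2; exit 1; }

# 1) + 2) Load the global configuration first, then the platform-specific one,
#         so platform values can override global defaults.
set -a                     # export every variable assigned while sourcing
source "$COMMON_CFG"
source "$PLATFORM_CFG"
set +a

# 3) Resolve the per-platform variables (e.g. RK3568_SDK_URL) via indirect
#    expansion and forward them into the Docker build, tagging the image per platform.
SDK_URL_VAR="${PLATFORM^^}_SDK_URL"
GCC_VERSION_VAR="${PLATFORM^^}_GCC_VERSION"

docker build \
  --build-arg SDK_URL="${!SDK_URL_VAR:-}" \
  --build-arg GCC_VERSION="${!GCC_VERSION_VAR:-}" \
  -t "dev-env:${PLATFORM}" \
  .
```

Loading `common.env` before the platform file lets platform-specific values override global defaults, which mirrors the layered-configuration idea described next.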
`configs/` Directory: Centralized Configuration Management
This directory serves as the repository for all environment configuration parameters. Its internal structure further reflects the idea of layered configuration:
- `platform-independent/common.env`: This file defines global configurations applicable to all target platforms, for example the internal Docker Registry address, global HTTP/HTTPS proxy settings, and organization-wide Git user information. By centrally managing these shared configurations, we avoid redundant definitions in multiple files and improve maintainability.
- `platforms/*.env`: Each `.env` file in this directory corresponds to a specific hardware platform. For example, the `rk3568.env` file contains parameters unique to that platform, such as `RK3568_SDK_URL` and `RK3568_GCC_VERSION`. This design makes adding support for a new platform extremely efficient: in principle, it only requires adding a new configuration file in this directory, without modifying any existing build scripts. Example entries are shown below.
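As a concrete illustration, the two configuration layers might contain entries like the following. Apart from `RK3568_SDK_URL` and `RK3568_GCC_VERSION`, which are mentioned above, the variable names and all values are hypothetical:

```bash
# configs/platform-independent/common.env — shared by every platform (values hypothetical)
DOCKER_REGISTRY=harbor.example.internal
HTTP_PROXY=http://proxy.example.internal:8080
HTTPS_PROXY=http://proxy.example.internal:8080
GIT_USER_NAME="Example Org Build"
GIT_USER_EMAIL=build@example.internal

# configs/platforms/rk3568.env — RK3568-specific parameters (values hypothetical)
RK3568_SDK_URL=https://sdk.example.internal/rk3568/sdk-release.tar.gz
RK3568_GCC_VERSION=10.3
```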
Architecture Overview Diagram
The following diagram shows the interaction and data flow between the components:
graph TD
subgraph "A. User Interface Layer"
A1[Developer executes: <br/>./build-dev-env.sh rk3568]
end
subgraph "B. Control & Configuration Layer"
B1("1. build-dev-env.sh parses 'rk3568'")
B2{"2. Configuration Loading"}
B2 --> B3[Global Config<br/>common.env]
B2 --> B4[Platform-specific Config<br/>rk3568.env]
end
subgraph "C. Build Execution Layer (Docker Engine)"
C1("3. Docker build process starts")
C2[Dockerfile]
C3[Script Templates Initialized]
C1 --> C2 & C3
end
subgraph "D. Output Artifact"
D1[Final Docker Image<br/>dev-env:rk3568]
end
A1 --> B1 --> B2
B3 & B4 -- Environment Variables --> C1
C2 & C3 -- Build Instructions --> D1
3. Standardized Workflow: From Build to Distribution
Based on the above architecture, we established a standardized workflow covering the entire lifecycle of the development environment:
- Image Build Phase: Initiated by the `build-dev-env.sh` script. The script consolidates configurations from `common.env` and the platform-specific `.env` file and, using Docker’s multi-stage builds (which will be detailed in a subsequent article), generates a highly optimized Docker image containing all necessary toolchains, SDKs, and dependency libraries.
- Container Management Phase: To simplify interaction with Docker for end users, we provide a client-side script (`project_handover/clientside/ubuntu/ubuntu_only_entrance.sh`) that wraps `docker-compose` commands. By executing this script, developers can conveniently create, start, stop, and enter the container’s shell environment, reducing their reliance on Docker knowledge; a simplified sketch of such a wrapper is shown after this list.
- Project Handover Phase: For scenarios such as new member onboarding or offline development, the system provides an automated packaging script (`project_handover/scripts/archive_tarball.sh`). This script packages resources such as the client management script, necessary certificate files (`harbor.crt`), and documentation (`ReadMe.md`) into a single `tar.gz` archive, ensuring that the entry point and documentation for the development environment are versioned and distributed atomically as a whole; a sketch of this packaging step also follows below.
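The container-management wrapper might look roughly like the following; the subcommand names and the `dev` service name are assumptions, not the actual contents of `ubuntu_only_entrance.sh`:

```bash
#!/usr/bin/env bash
# Simplified sketch of a docker-compose wrapper for end users.
# Subcommands and the 'dev' service name are assumptions, not the actual ubuntu_only_entrance.sh.
set -euo pipefail

case "${1:-help}" in
  up)    docker-compose up -d dev ;;            # create and start the dev container in the background
  shell) docker-compose exec dev /bin/bash ;;   # open an interactive shell inside the running container
  stop)  docker-compose stop dev ;;             # stop the container without removing it
  down)  docker-compose down ;;                 # remove the container and its network
  *)     echo "Usage: $0 {up|shell|stop|down}" >&2; exit 1 ;;
esac
```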
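Likewise, the packaging step could be sketched as follows; the packaged files are those named above, while the paths, script structure, and output name are assumptions:

```bash
#!/usr/bin/env bash
# Sketch of the handover packaging step.
# File list comes from the article; paths and the archive name are assumptions.
set -euo pipefail

OUTPUT="project_handover_$(date +%Y%m%d).tar.gz"

# Bundle the client entry-point script, registry certificate, and documentation
# into a single versioned archive for distribution.
tar -czf "$OUTPUT" \
  clientside/ubuntu/ubuntu_only_entrance.sh \
  harbor.crt \
  ReadMe.md

echo "Created ${OUTPUT}"
```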
4. Conclusion
By implementing an architecture that separates configuration from build logic, we have successfully transformed a complex, error-prone manual process into an automated, reproducible system. This system not only significantly reduces the setup time for new environments but, more importantly, provides the team with a stable and consistent development baseline, laying a solid foundation for future Continuous Integration (CI) and Continuous Deployment (CD) practices.
In the next article, we will delve into the technical details of the Image Build phase, including the application of Docker’s multi-stage build strategy and how to dynamically generate and configure scripts at build time using a templating mechanism.