Let’s face it: infrastructure management can be tedious. But what if I told you that SaltStack has some seriously useful features that can transform your experience from mundane to efficient?
While many configuration management tools like Ansible, Chef, or Puppet offer similar functionality, SaltStack distinguishes itself through its powerful event-driven architecture, efficient minion communication, and exceptional templating capabilities. These features make it particularly well-suited for managing complex, dynamic infrastructure at scale.
In this article, I’ll show you how to make SaltStack more pleasant to use, establish a proper development workflow for modules, and maintain a clean main branch while doing so.
Integrating Git with Salt for Better Version Control
Imagine having your entire infrastructure configuration tracked, versioned, and deployable with the same tools you use for application code. This isn’t just possible with Salt—it’s seamless.
Instead of manually copying files to your Salt master or using rsync jobs, GitFS allows you to use Git repositories directly as your source of truth for states and pillar data. This means automatic versioning, change tracking, and the ability to roll back to any previous state of your infrastructure. For configuration options and credential management, see the official documentation on using GitFS.
```yaml
# Salt master configuration for GitFS integration
fileserver_backend:
  - gitfs                        # Enable the GitFS backend

gitfs_provider: gitpython        # Use GitPython for Git operations
gitfs_base: main                 # Default branch to use
gitfs_update_interval: 60        # Check for updates every 60 seconds

git_pillar_provider: gitpython   # Use GitPython for pillar as well
git_pillar_update_interval: 60   # Check for pillar updates every 60 seconds
git_pillar_base: main            # Default branch for pillar data

gitfs_remotes:
  - ssh://git@gitlab.example.com/username/salt-control.git:
    - root: state                # Root directory for state files in the repo

ext_pillar:
  - git:
    - __env__ ssh://git@gitlab.example.com/username/salt-control.git:
      - root: pillar             # Root directory for pillar data in the repo
```
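The `root:` settings above assume the states and pillar data live in separate subdirectories of the same repository. A hypothetical layout matching that configuration might look like this:

```
salt-control/
├── state/           # served by GitFS (root: state)
│   ├── top.sls
│   └── nginx/
│       └── init.sls
└── pillar/          # served by git_pillar (root: pillar)
    ├── top.sls
    └── nginx.sls
```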
With this setup, your entire infrastructure becomes a Git repository. Every change is tracked. Every deployment is versioned. Every rollback is a simple `git revert` away.
Understanding the `__env__` Parameter
The `__env__` parameter in the Git pillar configuration serves an important role in environment management. When Salt executes, it replaces `__env__` with the current environment name (like "prod", "dev", or a Git branch name), automatically ensuring your pillar data matches your state files.
This parameter enables consistent environment isolation by:
- Matching pillar data to the same environment as your state files
- Preventing accidental deployment of development configurations to production
- Enabling branch-specific pillar data that stays synchronized with your state files
For example, when you run `salt '*' state.apply saltenv=feature-branch`, the `__env__` parameter ensures that pillar data from the `feature-branch` branch is used, maintaining perfect consistency between your states and configuration.
This environment separation is crucial for maintaining a reliable configuration deployment process and preventing environment-specific data from being applied to the wrong targets.
Overcoming GitFS Synchronization Challenges
While GitFS is powerful, it’s important to note that synchronization can be challenging. Salt has a documented issue (Salt Issue #66793) that makes it difficult to update the fileserver from Salt orchestration.
I’ve developed a custom solution that addresses this issue without requiring any patches to Salt’s code. Here’s my implementation, which is derived from Salt’s own fileserver.py update function but modified to work in orchestration:
```python
# /my/salt/repo/states/_runners/fileserver_hotfix.py
import salt.fileserver


def update():
    """
    Update the Salt fileserver cache.
    """
    my_opts = dict(__opts__)
    my_opts.pop('__pub_user', None)  # Remove the problematic key
    fs = salt.fileserver.Fileserver(my_opts)
    fs.update()
    return True
```
This simple module removes the `__pub_user` key that causes the issue. After committing and pushing this file, synchronize your GitFS:
salt-run fileserver.update
salt-run saltutil.sync_runners
Now, you can create an orchestration state file that handles the complete synchronization process:
```yaml
# /my/salt/repo/states/orch/sync_all.sls
update_fileserver:
  salt.runner:
    - name: fileserver_hotfix.update

update_git_pillar:
  salt.runner:
    - name: git_pillar.update
    - require:
      - salt: update_fileserver

sync_all_modules:
  salt.function:
    - name: saltutil.sync_all
    - tgt: '*'
    - tgt_type: glob
    - require:
      - salt: update_git_pillar

refresh_pillar:
  salt.function:
    - name: saltutil.refresh_pillar
    - tgt: '*'
    - tgt_type: glob
    - require:
      - salt: sync_all_modules

update_mine:
  salt.function:
    - name: mine.update
    - tgt: '*'
    - tgt_type: glob
    - require:
      - salt: refresh_pillar
```
After committing and synchronizing again, you can run this orchestration to update everything:
salt-run state.orchestrate orch.sync_all
This approach ensures that your GitFS, pillar data, and mine information stay in sync, resolving one of the more challenging aspects of working with Git and Salt.
Note that in high-availability setups with multiple Salt masters, you may need to adjust this approach to ensure all masters are synchronized properly. Consider adding a salt-run command that targets all masters in such scenarios.
Repository Organization: Single vs. Multiple
“Should I keep my states and pillars in the same repository or separate them?” (See Salt’s best practices for more guidance.)
This is a common question in the Salt community, and there are compelling arguments for both approaches:
The Single-Repository Approach
When your states and pillars live together:
- Changes to your infrastructure and its configuration happen in one atomic commit
- Your Git history tells a complete story of how your infrastructure evolved
- You can make sweeping changes across your entire system without fear of misalignment
- Testing becomes dramatically simpler since everything moves together
Example scenario: A team managing a consistent application stack across multiple environments (development, staging, production) would benefit from a single repository. When updating the application’s configuration, they can change both the state files (how the application is installed and configured) and the pillar data (environment-specific variables like database credentials) in a single commit, ensuring everything stays in sync.
When to Use Multiple Repositories
- When different teams own the infrastructure code versus the configuration data
- When your security requirements demand stricter access controls for sensitive data
- When your pillar data changes frequently, but your states are relatively stable
Example scenario: An organization where the security team manages credentials and sensitive configuration while the operations team manages infrastructure code. In this case, keeping pillar data in a separate repository with stricter access controls allows the security team to update credentials without requiring changes to the infrastructure code, while still leveraging the same deployment mechanisms.
The beauty of Salt is that it supports both approaches equally well. You can start with a single repository and split later if needed, or vice versa.
Effective Testing with SaltEnv
Here’s where Salt really shines compared to other configuration management tools: the ability to test changes in complete isolation using Git branches.
Before: Traditional Testing Workflow
- Develop changes locally
- Push to testing environment
- Test changes
- If issues arise, revert changes or fix in place
- Deploy to production
After: Salt Branch-Based Testing Workflow
- Create a feature branch in your Git repository
- Develop and commit changes
- Push the branch to your Git remote
- Test using the branch name as the environment: `salt '*' state.apply saltenv=new-feature`
- Iterate on the branch until everything works
- Merge to main branch only when fully tested
This command tells Salt to use the `new-feature` branch for both your state files and pillar data, completely isolated from your production environment. It's like having a parallel universe where you can experiment freely without fear of breaking production.
For even more focused testing, you can apply a single state file instead of running the entire top.sls configuration:
salt '*' state.sls mystate saltenv=new-feature
This allows you to test only the specific component you’re modifying, making the testing process faster and more targeted.
The Test Mode: Simulate Before You Apply
One of Salt's most valuable features for testing is the `test=true` parameter. It performs a dry run of your state execution, showing you exactly what would change without making any modifications to your systems.
salt '*' state.sls mystate test=true
With `test=true`, Salt will:
- Connect to your target systems
- Load all state files and pillar data
- Check current system state against desired state
- Report what would change (added, modified, removed)
- Exit without making any actual changes
For more complex state files, you can combine `test=true` with your branch testing:
salt '*' state.sls mystate saltenv=new-feature test=true
This powerful combination lets you:
- Test against an isolated Git branch without affecting production code
- Simulate the execution without making actual changes
- Verify that your states target the correct systems
- Confirm that the changes match your expectations
The test mode is particularly valuable when working with destructive operations or when deploying to critical production systems, as it provides an extra layer of verification before committing to changes. Once you're confident in the changes, you can run the same command without the `test=true` parameter to apply them for real.
With `pillarenv_from_saltenv: True` in your configuration, Salt automatically keeps your test data synchronized with your test code, ensuring consistent testing across environments.
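For reference, this is a single master-config option; a minimal fragment of the master configuration would look like:

```yaml
# Salt master configuration
pillarenv_from_saltenv: True   # Pillar environment follows the requested saltenv
```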
This approach allows for thorough testing of complex infrastructure changes before merging to the main branch, significantly reducing the risk of issues in production environments.
Essential Debugging Tools for Salt States
Ever wished you could see exactly what Salt is thinking? You can, with these debugging superpowers:
`show_sls` - Peek Behind the Curtain
salt '*' state.show_sls my_state
This reveals the raw SLS data structure after Salt has processed it, letting you verify your states are defined correctly.
Example output:
```
my_host:
    ----------
    nginx:
        ----------
        __env__:
            base
        __sls__:
            nginx
        pkg:
            |_
              ----------
              name:
                  nginx
            - installed
            |_
              ----------
              order:
                  10019
```
`show_low_sls` - See the Final Plan
salt '*' state.show_low_sls my_state
The “low state” is Salt’s final internal representation after all preprocessing is done. It’s especially useful when debugging complex states with multiple includes or extensive Jinja templating because it shows exactly what Salt will execute after all rendering and inheritance has been resolved.
Example output:
```
my_host:
    |_
      ----------
      __env__:
          base
      __id__:
          nginx
      __sls__:
          nginx
      fun:
          installed
      name:
          nginx
      order:
          10019
      state:
          pkg
```
`show_pillar` - Expose All Secrets
salt-run pillar.show_pillar my_host
This displays all pillar data available to a minion, which is crucial for tracking down missing or incorrect configuration values.
Example output:
```
my_host:
    ----------
    nginx:
        ----------
        port: 80
        worker_processes: 4
```
`show_top` - Understand the Hierarchy
salt '*' state.show_top
This displays which top files are applied to the minion and in what order, helping you untangle complex state inheritance.
Example output:
```
my_host:
    ----------
    base:
        - ssh
        - nginx
```
These commands have saved me days of troubleshooting. When something isn’t working as expected, I don’t have to guess—I can see exactly what Salt sees.
Practical Jinja2 Template Debugging
Jinja2 templates are powerful but can be frustrating when they don’t behave as expected. Here’s my secret weapon for debugging them:
# Interactive Python shell example for debugging complex Jinja2 templates
>>> import jinja2
>>> context = {
... "users": [
... {"name": "alice", "groups": ["admin", "dev"]},
... {"name": "bob", "groups": ["dev"]},
... {"name": "charlie", "groups": ["ops"]}
... ]
... }
>>> template = jinja2.Template("""
... {% for user in users %}
... {% if 'admin' in user.groups %}
... {{ user.name }} is an admin
... {% endif %}
... {% endfor %}
... """)
>>> print(template.render(**context))
alice is an admin
Notice the extra blank lines in the output? This is where whitespace control becomes important, as we’ll see in the next section.
This approach lets you test complex templates outside of Salt, iterating quickly until you get exactly what you want. I’ve solved in minutes what would have taken hours of trial and error directly in Salt states.
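While iterating in the shell, it also helps to make missing variables fail loudly: by default, Jinja2 renders an undefined variable as an empty string, which silently hides typos in variable or pillar key names. A minimal sketch using plain `jinja2` (the misspelled key `grups` is a deliberate example):

```python
import jinja2

# StrictUndefined turns references to undefined variables into hard
# errors instead of silently rendering empty strings.
env = jinja2.Environment(undefined=jinja2.StrictUndefined)

# "grups" is a typo for "groups"; with StrictUndefined this raises.
template = env.from_string("{{ user.name }} is in {{ user.grups }}")

try:
    template.render(user={"name": "alice", "groups": ["admin"]})
except jinja2.UndefinedError as exc:
    print(f"Template bug caught: {exc}")
```

With the default environment, the same render would quietly produce `alice is in `, and the typo might only surface much later in a broken deployment.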
Optimizing Whitespace in Jinja2 Templates
This might seem trivial, but proper whitespace handling in Jinja2 templates can make your Salt states dramatically more readable and maintainable.
Consider this example:
# Users
{% for user in users %}
{{ user.name }}
{% endfor %}
The rendered output would include an extra blank line before each user, because the newline after each `{% for %}` tag line is preserved:

```
# Users
(blank line here)
Alice
(blank line here)
Bob
(blank line here)
Charlie
```
However, with the `{%-` syntax (note the dash), which strips the whitespace before the tag:
# Users
{%- for user in users %}
{{ user.name }}
{%- endfor %}
The output is cleaner without the extra empty line:
# Users
Alice
Bob
Charlie
This might seem like a small detail, but when you’re working with complex, nested templates, proper whitespace control becomes essential for maintaining sanity.
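If adding dashes to every tag feels noisy, Jinja2 also supports environment-wide whitespace control via `trim_blocks` (drop the newline after a block tag) and `lstrip_blocks` (strip indentation before one); Salt exposes master-level equivalents as the `jinja_trim_blocks` and `jinja_lstrip_blocks` options. A small sketch of the environment-wide approach:

```python
import jinja2

# trim_blocks removes the newline after each block tag; lstrip_blocks
# strips leading whitespace before a block tag on its line.
env = jinja2.Environment(trim_blocks=True, lstrip_blocks=True)

template = env.from_string(
    "# Users\n"
    "{% for user in users %}\n"
    "{{ user.name }}\n"
    "{% endfor %}\n"
)

users = [{"name": "Alice"}, {"name": "Bob"}, {"name": "Charlie"}]
print(template.render(users=users))
# Output:
# # Users
# Alice
# Bob
# Charlie
```

The tradeoff is that these options apply to every template rendered by the environment, whereas the `{%-` dashes give you per-tag control.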
Conclusion: Making Infrastructure Management More Efficient
After using Salt extensively, these features have transformed infrastructure management from a challenging task into something much more manageable and satisfying. The combination of Git integration, environment isolation, robust debugging tools, and a methodology for testing Jinja templating creates a process that’s not just efficient but actually enjoyable.
The next time you approach an infrastructure change, remember these Salt features. They can significantly improve your configuration management experience.
As DevOps practices continue to evolve toward more GitOps-focused workflows and infrastructure-as-code becomes the standard, mastering these Salt techniques positions you well for the future. The ability to test infrastructure changes in isolation while maintaining strict version control aligns perfectly with modern continuous integration and delivery practices.
Try them out, and you might find that Salt becomes an essential tool in your DevOps toolkit.