[{"content":"Welcome to the documentation of MakinaStates, the deployment framework Reference Installation Usage Dedicated topics Operating a cluster API","description":"","section":"","subtitle":null,"tags":[],"title":"MakinaStates documentation","type":"page","url":"/"},{"content":"REMEMBER THAT FOR NOW YOU HAVE TO USE UBUNTU \u0026gt;= 14.04. Use root unless you understand well how it works to handle user install Download Get MakinaStates by cloning it from github Usually we install it in /srv/makina-states git clone http://raw.github.com/makinacorpus/makina-states /srv/makina-states Common install command Install with scratch nodetype, with refresh cron, logrotate. bin/boot-salt2.sh -C --install-logrotate --install-crons boot-salt2.sh, the makina-states installer \u0026amp; manager boot-salt2.sh will try to remember how you configured makina-states on each run. It stores configs in \u0026lt;clone_dir\u0026gt;/etc/makina-states You can see the help this way: bin/boot-salt2.sh --help # Short overview: bin/boot-salt2.sh --long-help # Detailed overview: If you want to install with default options (scratch) bin/boot-salt2.sh -C Optional post install steps Install the logrotate to rotate salt logs bin/boot-salt2.sh -C --install-logrotate Install the cron that refresh makina-states code every since and then (15min) bin/boot-salt2.sh -C --install-crons Install the salt \u0026amp; ansible binaries to /usr/local/bin. THIS IS NOT RECOMMENDED ANYMORE, AND CAN EVEN BE HARMFULL bin/boot-salt2.sh -C --install-links Related documents Usage Layout database.sls nodetypes","description":"","section":"install","subtitle":null,"tags":["installation","install","upgrade"],"title":"Installation","type":"install","url":"/install/"},{"content":"saltcall Wrapper We developped a special module to call saltcall on remote systems. You can use it via ansible: ANSIBLE_TARGETS=$(hostname) bin/ansible all \\ -m saltcall -a \u0026quot;function=test.ping\u0026quot; ANSIBLE_TARGETS=$(hostname) bin/ansible all \\ -m saltcall -a \u0026quot;function=grains.get args=fqdn\u0026quot; Or via a playbook like in our saltcall one , usable this way: ANSIBLE_TARGETS=$(hostname) bin/ansible-playbook \\ ansible/plays/saltcall.yml -m saltcall -a \u0026quot;function=test.ping\u0026quot; It\u0026rsquo;s better to use the playbook because it call the makinastates_pillar role to copy locally on the remote box the pillar computed by the salt+pillar bridge before executing the salt command.","description":"","section":"usage","subtitle":null,"tags":["topics","ansible"],"title":"Ansible saltcall module","type":"usage","url":"/usage/ansible/saltcall/"},{"content":"Command log level Remember that -lall refers to the loglevel all. You can lower the output level by lowering down to info (-linfo). 
Run a salt state bin/salt-call -lall --retcode-passthrough state.sls \u0026lt;STATE\u0026gt; Run a salt function bin/salt-call -lall --retcode-passthrough test.ping Configure a pillar entry pillar/pillar.d/myentry.sls --- makina-states.foo.bar: bal","description":"","section":"usage","subtitle":null,"tags":["usage","salt"],"title":"Salt","type":"usage","url":"/usage/salt/"},{"content":"Leaving localhost When you want to execute saltstack states (makina-states) remotely, here is the preflight checklist: declare your host in the database.sls Ensure that the target is reachable from ansible Bootstrap makina-states on the remote box, via the provided makinastates role Ansible wrappers specifics To use ansible, please use the makina-states wrappers and never EVER the original ansible scripts directly. If you are using the database.sls, we use environment variables that are specific to makina-states and tell the ext pillar (on the local saltstack side) which environment to gather information for. ANSIBLE_TARGETS : list of hosts that we will act on. This limits the scope of the ext pillar generation, so set it to speed up operations. ANSIBLE_NOLIMIT : if set, we won't limit the scope of ansible to ANSIBLE_TARGETS Examples Example 1 ANSIBLE_TARGETS=$(hostname) bin/ansible all -m ping Example 2 bin/ansible -c local -i \u0026quot;localhost,\u0026quot; all -m ping Examples with salt Call a state.sls run ANSIBLE_TARGETS=$(hostname) bin/ansible all -m shell \\ -a \u0026quot;salt-call --retcode-passthrough state.sls foobar\u0026quot; Details See: bin/ansible bin/ansible-galaxy bin/ansible-playbook bin/ansible-wrapper-common We preconfigure a lot of things in our wrappers, like: Loading configuration (roles, playbooks, inventories, plugins) from: ./ ./ansible ./.ansible \u0026lt;makinastates_install_dir\u0026gt;/ansible /usr/share/ansible (depending on the options; respects the ansible default configuration) /etc/ansible (depending on the options; respects the ansible default configuration) When ANSIBLE_TARGETS is set, we will limit the play to those targets unless ANSIBLE_NOLIMIT is set. Calling makina-states\u0026rsquo;s flavored ansible from another repository As said previously, we load the current folder (and ./.ansible, ./ansible as well). This clever trick lets you add roles and plays to a specific repository while still depending on plugins or roles defined in makina-states. This means that you will be able to call the ansible wrapper FROM the directory holding your specific ansible installation, and the whole will assemble nicely. For example: if makina-states is installed in /srv/makina-states your project is installed inside /srv/projects/foo/project You can create your roles inside /srv/projects/foo/project/ansible/roles You can depend on any makina-states role, especially makinastates_pillar, which deploys the locally gathered pillar for a box to the remote host if you are not acting on localhost. To call ansible or ansible-playbook, do it this way: cd /srv/projects/foo/project /srv/makina-states/bin/{ansible,ansible-playbook} $args Related salt call module","description":"","section":"usage","subtitle":null,"tags":["usage","ansible"],"title":"Ansible","type":"usage","url":"/usage/ansible/"},{"content":"Organisation \u0026amp; Workflow makina-states primarily uses salt to deploy on the targeted environment. Our salt states are meant to be used in a specific order, especially when you call salt via the makina-states.top sls. We apply first the nodetype configuration. 
Then, we will apply the controllers configuration. Then, we will apply the localsettings states. After all of the previous steps, we may configure services like sshd, crond, or databases. In scratch mode, no services are configured by default. Finally, we may install projects via mc_project. A project is just a classical code repository which has a \u0026ldquo;.salt\u0026rdquo; and/or ansible playbooks/roles folder committed along with it, with enough information on how to deploy it. Registries The configuration of any of the formulas (nodetypes, controllers, localsettings, services) is handled via Makina-States registries. History Makina-States first used the salt HighState principle of configuring everything. It was initially based on nodetype presets, preselected collections of salt states to apply to the system, from which the highstate would run. We recently dropped this behavior, and now you must apply states explicitly. Indeed: a highstate tends to grow, and when you decide to reapply it you may accidentally deliver things you had forgotten about. It is also long, very long, to wait for everything to reapply for small changes.","description":"","section":"reference","subtitle":null,"tags":["reference","installation"],"title":"Order of operation","type":"reference","url":"/reference/sumup/"},{"content":"Layout Relative to the top makina-states clone folder. bin/ansible - wrapper to ansible bin/ansible-galaxy - wrapper to ansible-galaxy bin/ansible-playbook - wrapper to ansible-playbook bin/salt-call - wrapper to salt-call ansible/ - ansible plays, roles, modules, etc. etc/ - configuration etc/ansible/ - ansible configuration etc/salt/ - saltstack configuration etc/makina-states/ - makina-states configuration pillar/ - saltstack pillar files pillar/pillar.d/ - saltstack pillar files (global) pillar/private.pillar.d/ - saltstack pillar files (for the current node) pillar/\u0026lt;minion_id\u0026gt;.pillar.d/ - saltstack pillar files (for a specific minion) salt/makina-states/ - saltstack states makina-states/ - makina-states saltstack states salt/_modules/ - custom salt modules salt/_pillar/ - custom extpillar modules Salt pillar Saltstack configuration is based on pillars. To facilitate configuration of the Top file, we added those features: Any JSON file can be used as pillar data. Any SLS/json file dropped inside pillar/pillar.d/ will be loaded for all minions as pillar data Any SLS/json file dropped inside pillar/private.pillar.d/ will only be loaded for the current node of operation. Any SLS/json file dropped inside pillar/\u0026lt;$minionid\u0026gt;.pillar.d/ will only be loaded for the \u0026ldquo;$minionid\u0026rdquo; host Salt + Ansible bridge notes Makina-states is best used with an ansible dynamic inventory that bridges the salt pillar with ansible via a salt module: mc_remote_pillar. This module is pluggable: it searches the installed salt modules for those that declare specially named functions: get_masterless_makinastates_hosts() returns the list of hosts to manage get_masterless_makinastates_groups(minionid, pillar) returns the list of groups for a specific minion id For each host found by all get_masterless_makinastates_hosts functions: Get its pillar by calling mc_remote_pillar.get_pillar($host) Extract/generate the relevant ansible host vars for this minion from the information in the pillar. The saltpillar ansible hostvar is the pillar of this minion. 
Generate ansible groups from those hostvars by calling each get_masterless_makinastates_groups function By default, we use the mc_pillar ext pillar which loads a file, etc/makina-states/database.sls, describing our infrastructure; this will: list all nodes that are configured as ansible targets generate pillar info for all nodes (which thus ends up in the ansible inventory of the related hosts). Custom extpillar In other words, to add your custom way of managing your hosts: Create an ext_pillar that completes the pillar for a specific minion depending on its minion id; that is why the easiest way is to adopt a minionid/hostname naming scheme. Create a module that implements the get_masterless_makinastates_hosts \u0026amp;\u0026amp; get_masterless_makinastates_groups functions Register the pillar and the module in the local makina-states installation (see below) Take example on: the module (search for get_masterless_makinastates_groups \u0026amp;\u0026amp; get_masterless_makinastates_hosts) and the extpillar To load your extpillar, you will have to add it to the local salt configuration. You can add a file this way $WC/etc/salt/minion.d/99_extpillar.conf ext_pillar: - mc_pillar: {} - mc_pillar_jsons: {} - mycustompillar: {} To load your custom module, place it under $WC/salt/_modules To load your custom pillar, place it under $WC/salt/_pillar Verify the pillar for a minion Use this command: bin/salt-call mc_remote_pillar.get_pillar \u0026lt;minion_id\u0026gt; Verify the groups for a minion Use this command: bin/salt-call mc_remote_pillar.get_groups \u0026lt;minion_id\u0026gt; (OPTIONAL) Add a cron to speed up pillar generation To regularly regenerate the pillar for all the configured minions and speed up ansible calls (the pillar will already be cached at call time), you can register a cron that does it. /etc/cron.d/refresh_ansible 15,30,45,00 * * * * root /srv/makina-states/_scripts/refresh_makinastates_pillar.sh","description":"","section":"reference","subtitle":null,"tags":["topics","ansible"],"title":"Layout","type":"reference","url":"/reference/layout/"},{"content":"For configuring a single machine, you can rely on a locally crafted pillar, but to operate a cluster this would be a lot more cumbersome and not DRY. For this we created mc_pillar, an ext_pillar that describes a whole cluster infrastructure from a single SLS file. This special file, etc/makina-states/database.sls, describes our infrastructure and especially how to access the remote systems. At first, you will need to copy etc/makina-states/database.sls.in, which is a sample, and adapt it to your needs: cp etc/makina-states/database.sls.in \\ etc/makina-states/database.sls $EDITOR etc/makina-states/database.sls The contents of the file are mostly self-explanatory. This file is then parsed by the mc_pillar module (called via an extpillar hook) to get the appropriate pillar for a specific minion id. From this pillar, a minion grabs a good part of its system configuration: backups, ssl, cloud configuration and so on. 
We heavyly rely on memcached to improve the performance, so first please install it this way: bin/salt-call -lall state.sls makina-states.services.cache.memcached Verify the pillar for a minion after a database.sls change Use this command: service memcached restart bin/salt-call mc_pillar.ext_pillar \u0026lt;minion_id\u0026gt;","description":"","section":"reference","subtitle":null,"tags":["topics","minions","databasesls"],"title":"database.sls","type":"reference","url":"/reference/databasesls/"},{"content":"General use Management is done via the pillar, and via the makina-states.localsettings.ssl sls. After configuration, apply it to your system via bin/salt-call -l all state.sls makina-states.localsettings.ssl Or via ansible (remote host):: ANSIBLE_TARGETS=\u0026quot;$hostname\u0026quot; bin/ansible-playbook \\ ansible/plays/saltcall.yml -e saltargs makina-states.localsettings.ssl Add a certificate to the system trust system (add a \u0026lsquo;ca\u0026rsquo;) pillar/pillar.d/cert.sls # \u0026lt;CN\u0026gt; makina-states.localsettings.ssl.cas.foobar: | -----BEGIN CERTIFICATE----- MIIDejCCAmKgAwIBAgICA+gwDQYJKoZIhvcNAQELBQAwfzELMAkGA1UEBhMCRlIx DDAKBgNVBAgMA1BkTDEPMA0GA1UEBwwGTmFudGVzMRYwFAYDVQQKDA1NYWtpbmEg Q29ycHVzMQ8wDQYDVQQDDAZmb29iYXIxKDAmBgkqhkiG9w0BCQEWGWNvbnRhY3RA bWFraW5hLWNvcnB1cy5jb20wIBcNMTYwNTA0MTA0NjAzWhgPMzAzNDExMDQxMDQ2 MDNaMH8xCzAJBgNVBAYTAkZSMQwwCgYDVQQIDANQZEwxDzANBgNVBAcMBk5hbnRl czEWMBQGA1UECgwNTWFraW5hIENvcnB1czEPMA0GA1UEAwwGZm9vYmFyMSgwJgYJ KoZIhvcNAQkBFhljb250YWN0QG1ha2luYS1jb3JwdXMuY29tMIIBIjANBgkqhkiG 9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnLYWB4f9lRVc/fbqOvOCNTCefWnNwKehyf9z LKzZ93ki5bHYLKUoI7tWK2UOKNbnADhEfgGiWNcGtdrr9wc4FFLFR43tUfIxMfqe wUcsv06V9IsmIP4Pi+knAPZG5fXystlPfLjom4bCx5mQr2SGIijw2ogYHKAIdgZJ rviDWM2XIbdEx0TIqkOAokKqUtDr8ZEG289P5v5mrHjacAC8GzhxCgg1RWmaJOhW jc6bfdgLEOQCwt3hE92r+qrh0JjxBINVLE6IO8dL1jGxN8O+U/sQdhDvuN1bwyXd 8117+FSP8C+nnOK37MI27qv0D+sEZXZXAdEAY6w0WF4EAuY/kwIDAQABMA0GCSqG SIb3DQEBCwUAA4IBAQBHG6MkNhaeXWqMqzcYmLWZQZ6hONfRaK7lZlKmly6yVzLJ Y6v5wPMtWDzE0ALS0K2nhG4dEuEo1D1/dQhdz+zmJC5+xVnCzzWIxfNs0GgQMTVj eEmp3Hl0QBwe66swFaMKPz9+1eiQKaTE4pcwOXGEFwephaJWkswX4Fw0o9CA7NLl z0uIpHB12tcGlxS7joraj6aV4nKj+T3xVzsQqR2x5jbZMzsn/1W4afeSKZkBWiNI Z1cASST8OvDiBkQna7LNqDVfogezK0h/8Wbqp5dipeNIY9xu/L4Hr9+Djb9mOwp+ gKtKM4seNltkeKgYrupAUecK3Rs+xiF9j5xiv2X1 -----END CERTIFICATE----- Add a certificate and it\u0026rsquo;s key for reuse in makina-states packages applications The certicate can be either a selfsigned certificate or a certificate and its authority chain with the certificate itself comes first pillar/pillar.d/cert.sls # \u0026lt;CN\u0026gt; makina-states.localsettings.ssl.certificates.titi: - | -----BEGIN CERTIFICATE----- MIIDdjCCAl6gAwIBAgICA+gwDQYJKoZIhvcNAQELBQAwfTELMAkGA1UEBhMCRlIx DDAKBgNVBAgMA1BkTDEPMA0GA1UEBwwGTmFudGVzMRYwFAYDVQQKDA1NYWtpbmEg Q29ycHVzMQ0wCwYDVQQDDAR0aXRpMSgwJgYJKoZIhvcNAQkBFhljb250YWN0QG1h a2luYS1jb3JwdXMuY29tMCAXDTE2MDUwNDExMDcxNFoYDzMwMzQxMTA0MTEwNzE0 WjB9MQswCQYDVQQGEwJGUjEMMAoGA1UECAwDUGRMMQ8wDQYDVQQHDAZOYW50ZXMx FjAUBgNVBAoMDU1ha2luYSBDb3JwdXMxDTALBgNVBAMMBHRpdGkxKDAmBgkqhkiG 9w0BCQEWGWNvbnRhY3RAbWFraW5hLWNvcnB1cy5jb20wggEiMA0GCSqGSIb3DQEB AQUAA4IBDwAwggEKAoIBAQDMRnNmGHdSfjmv/HQSLKQ0aIhukNQJP1/pVYfoHN/X BE5pbbM3voZaecq5puJV1CAojYO9XkIY6FFOQnXbr+285ectuTHFHGTQpw/cEtUd uHR7SXdhcfFOMw2og/WcdiOj5+WqEkm5hsT5QWMFTYQxXsRtwWVxx9JzBSStPpzi aJ1bfg51F+iEFvnkAsYN6++CAGp93pKNhKKyPx52fiiSwVH5+7Iouw5BzX68DQK5 1i2YoHDRKdZmBJ/wVUgISsuHEf4JhKMfyiQWvfjqNL5FQx4nEnhHbcrYr+/h/2+A 7IdscdbpQa4mKK6+5B9EIjR/c/6LKmXhaQuNwg+UaP9JAgMBAAEwDQYJKoZIhvcN 
AQELBQADggEBAB/egp+ifuMfl7tFiRiO95QeIu6YLNSs6l2ZQ8uHSqlQ6/8GSq/J C1vt/yy4nPZ/14AonlIxWiKMH/1I96Y7W/8KZ5v9DbYsjGwO1TwqNxsqxjjMlW4g Qb1L4vnAq+25HhX0M1xiJWErgPfzMCVTyOhhuVaIPVTZUBhN5GsVtzuzeC4Vpg1O wAhBRgbyi9gxdWoxeaujColAoiwBYgLt6d+jg7I7RSYvd6bIixc00G0J4zY0d8jB ztK3UXbf4G0Bt7R/DcyZ0Tsp51+d5vpD+UjKkpijwhDkUGNC1ljD5M95NlmbCPdp 5ODKbWRuHLcUyzEAjzplwC6FpAlvN11SanA= -----END CERTIFICATE----- - | -----BEGIN PRIVATE KEY----- MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDMRnNmGHdSfjmv /HQSLKQ0aIhukNQJP1/pVYfoHN/XBE5pbbM3voZaecq5puJV1CAojYO9XkIY6FFO QnXbr+285ectuTHFHGTQpw/cEtUduHR7SXdhcfFOMw2og/WcdiOj5+WqEkm5hsT5 QWMFTYQxXsRtwWVxx9JzBSStPpziaJ1bfg51F+iEFvnkAsYN6++CAGp93pKNhKKy Px52fiiSwVH5+7Iouw5BzX68DQK51i2YoHDRKdZmBJ/wVUgISsuHEf4JhKMfyiQW vfjqNL5FQx4nEnhHbcrYr+/h/2+A7IdscdbpQa4mKK6+5B9EIjR/c/6LKmXhaQuN wg+UaP9JAgMBAAECggEAPkn9RlSPjggPbyp7+k7Cg3icoZpoDanVhUEfgBfN6bLW di+NRqJCNbSNrK7GtYVJiRQd59CmNxIgOMzrQ2ISDFfOdpLSKljOJRHMND9J3RYx 7qYoUP59pmrK72fNrTgZBhHgZkvNT1VZGuhlWWiZtrQ/EXi3hkp4UbpvxKQjEqZn o2+vbUYh440dKdw+iJnDAYZi/1yFCqiIxkWUSFFky8gjppQ7CBeQ4LEHKPodd4A2 ccnK8op6H5n26Xqdi65G0SR+X39xtEGp8pzUQW8XaXCjEtBNhD95ymGXEmq8KfqG XcN6nBMVp5V9X4+xFg2hBsFPsL6xrT7deASBKyDQAQKBgQD9VG8okA1rd11ttyvi DhIV6PlYBxhEex4lGeqOrjETKMsAFDsJ5Ntlr3MiXiLIuQ0pAcu3tpxNEkTLkcHo e0VpP7HCr14vnCpZyB1B8wTmbBJVMXkD+oaN7pzEJhauaombfDRH4NKd87lgveLG 3x/CFHdJfEvxwjRdpER+ZxJvIQKBgQDObaa9WcyZnuB7R109TTiZgSGo1XljF1m9 Ji+85sSdV1mHnUyghgszbnW5s8dM1sweY5ZAONFsF9j06iL2h1XnGlgKHBL22UGH o5nc5/oHfuc/YmRdSUCgmwSZixrInFjfvUiE/tR/j4z2AnpEB4/8genEm+elXYIE uodJpj3TKQKBgQChY+FNXjiudmU3OLLkWUJ8Yug3hI2ZUzZpPJGKRL9PDXYGntzd +MctiRE4m/BdIEeaEGLQr630C+d4KWv3yFD4NHPzK/Y9LqhsemjpUwGUKtWjINmQ B1MhqRqGfB2HEKiKPh6wjDKiHlvDnjWTrSJ2asN0NZPMeYUTA0v/m3rLAQKBgFJZ K8sdp6Eg4CxNq8RoqcuS1/qiLmp5RjNOqHyTEpwx3GVdOtROpOk/h3ctYLQmfAcj cyzrfZ/BY6tQO+Jc2sf2mmhuCqKuyJVzjk2xvOyAk3+VoLQWJNHtBUi7VVPyCwI2 YFet0NeSTIlXM68v1SDGMptcFmzBgLyiLJYU21UBAoGBAIhSvpsw3z/gAf9nbciZ 9puJiPqFBld7DrCp69iaD/ryXzLfzwI3bzWR8M8TuBO6DxApiYx7Zps4QabPQZTN U4UFg0AcdRh27OUXYtGENw7W0ssZKhlII78WB+0haAwe+kQJ4aNpF0eqWXLH7thR zKKdzi+lMlG5NimeR246wBvX -----END PRIVATE KEY----- Add certificates via the makina-states ext_pillar When using mc_pillar deployment, edit your local etc/makina-states/database.sls Take an example on the database.sl ssl \u0026amp; ssl_certs section. The ssl_certs section is a mapping id / tuples of (certicate, key) mappings. certicate can be either a selfsigned certificate or a certificate and all the authority, where the certificate of the common name comes first. The ssl section is where you will map certificates described in the ssl_certs section to a particular minion id. Remember that the special default section has the purpose to map certificate to any minion You can add either certificates for a host by specifying them by the id index or configure infra wide certs by setting them in the default section. 
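To give a rough idea of the shape being described, a minimal sketch of those two sections could look like the snippet below. This is illustrative only: the authoritative layout is the ssl / ssl_certs part of the shipped database.sls.in sample, the exact value format of the ssl section (here assumed to be a list of ssl_certs ids) should be checked against it, and mycert / minion1.example.com are placeholder names:
ssl_certs:
  mycert:
    - |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    - |
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
ssl:
  default: [mycert]               # served to any minion
  minion1.example.com: [mycert]   # or mapped to one particular minion id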
Reconfigure the SSL system bin/salt-call -lall state.sls makina-states.localsettings.ssl Use the ssl macro in a state to register a certificate This adds the certificate to the cloud ssl directories and may also add it to the system-wide ssl trust foo.sls: {% import \u0026quot;makina-states/localsettings/ssl/macros.jinja\u0026quot; as ssl with context %} {{ ssl.install_certificate(cert_string [, cert_key_string] [, trust=True/False]) }} Parameters: cert_string : either a certificate string (full certificate in PEM format), a path to a certificate in PEM format, or a key inside the mc_ssl.settings.certificates registry (if you need an authority chain, place the certificate first) cert_key_string (optional) : if cert_string is neither an inline certificate nor a certificate filepath, this will look up a matching certificate inside the mc_ssl.settings.certificates pillar key. trust (optional) : boolean telling whether to register the certificate in the system-wide ssl trusted certificates","description":"","section":"topics","subtitle":null,"tags":["topics","ssl"],"title":"Manage ssl certificates","type":"topics","url":"/topics/ssl/"},{"content":"We needed a way, in our salt states: To deduplicate the cumbersome handling of variables To make cleaner jinja code To aggregate, for a configuration knob, values from different sources, by precedence. For this, we decided to abstract it in python code. So, a registry in the Makina-States vocabulary is a salt execution module that returns a python dictionary containing various configuration knobs with sane defaults, overridable at will by the user via the pillar or the grains. The idea was then to use dedicated saltstack execution modules to provide dictionaries of data aggregated with this precedence: Pillar (pillar) Grains (grains) Configuration of the minion (opts) Configuration of the master (opts['master']) Examples: mc_apache mc_nginx mc_mysql Those registries rely on a heavily used function, mc_utils.default, which gathers, for a given configuration prefix, all the knobs from the various pieces of data we want (pillar, grains, opts). Thus, for example, if you want to install the mysql service: You will use this sls: bin/salt-call -linfo --retcode-passthrough \\ state.sls makina-states.services.db.mysql Reading the code of the mysql formula, you see that it calls mc_mysql To override the port (default: 3306), you can do it in the pillar or the grains in those ways Flat Method (preferred): pillar/pillar.d/mysql.sls makina-states.services.db.mysql.port: 3306 Nested Method: pillar/pillar.d/mysql.sls makina-states.services.db.mysql: port: 3306","description":"","section":"reference","subtitle":null,"tags":["reference","installation"],"title":"Registries \u0026 Configuration","type":"reference","url":"/reference/registries/"},{"content":"the formula is in salt/makina-states/services/proxy/haproxy Idea We provide a facility to auto-configure HTTP/HTTPS/REDIS/TCP backends via the information that we can discover in the pillar. In a future iteration we want to add a discovery mechanism, maybe via the mine or a discovery service like ETCD, zookeeper or consul. The subkey for configuring proxies is the makina-states.haproxy_registrations top dictionary. How to use Extra HAProxy configuration can be done by overriding the default registry, mc_haproxy.settings, via the makina-states.services.proxy.haproxy prefix as usual. Look at the available tweaks by calling salt-call mc_haproxy.settings. 
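For example, overriding one of those settings is just a pillar entry under that prefix, exactly like the registry overrides shown earlier. The snippet below is only a sketch: the key name (defaults.maxconn) is a hypothetical example, the real knobs being whatever salt-call mc_haproxy.settings prints:
pillar/pillar.d/haproxy.sls
makina-states.services.proxy.haproxy.defaults.maxconn: 4096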
After that, add makina-states.haproxy_registrations.* entries to add proxies. ssl is used and reused from makina-states.localsettings.ssl First, activate haproxy and run the states a first time: bin/salt-call -lall state.sls makina-states.services.proxy.haproxy The most complete form to register a minion with haproxy is as follows makina-states.haproxy_registrations.\u0026lt;arbitrary id\u0026gt;: - ip: \u0026lt;local_ip of backend\u0026gt; frontends: \u0026lt;frontend_port\u0026gt;: to_port: \u0026lt;dest_port\u0026gt;, # optional, defaults to \u0026lt;frontend_port\u0026gt; mode: \u0026lt;http/https/tcp/tcps/redis\u0026gt; hosts: [\u0026lt;hosts_to_proxy_from\u0026gt;] # only useful in http(s) mode regexes: [\u0026lt;hosts_to_proxy_from\u0026gt;] # only useful in http(s) mode wildcards: [\u0026lt;hosts_to_proxy_from\u0026gt;] # only useful in http(s) mode This declaration lets the haproxy minion side configure the appropriate haproxy frontend and backend objects. Notes: The ip is the local ip of the minion to proxy requests to Frontends is a dictionary of ports / metadata describing how to configure haproxy to proxy to the minion: mode is not the haproxy mode but a switch for us to know how to proxy requests. http/https Proxy HTTP(s) requests, depending on an additional regexes/wildcards/hosts knob regexes list of regexes to match, in the form [host_regex, PATH_URI_regex] wildcards list of strings matched case-insensitively against the end of the HOST header hosts list of strings matched case-insensitively and exactly against the HOST header tcp/tcps configure a tcp based proxy; here regexes/wildcards/hosts are useless. Be warned, main_cert will be the served ssl certificate as no SNI is possible redis configure a redis frontend (tcp based, so also no use of regexes/wildcards/hosts) based on https://support.pivotal.io/hc/en-us/articles/205309388-How-to-setup-HAProxy-and-Redis-Sentinel-for-automatic-failover-between-Redis-Master-and-Slave-servers to_port can be used to override the port to proxy to on the minion side if it is not the same as on the haproxy side If frontends are not specified, you need to specify ip and one of hosts/regexes/wildcards, as we default to configuring http \u0026amp; https proxies. If frontends are specified, you need to respecify all of them; no default will be used in this case. The 80 \u0026amp; 443 frontend port modes default to http \u0026amp; https respectively. By default, if no frontends are specified, we set up http \u0026amp; https frontends. The SSL backend will try to forward first on 443, then on 80. The ACL order for http mode is not predictable yet and will be difficult to make so. Prefer a sensible configuration for your case rather than complicating the ACL generation algorithm. proxy If we have a minion haproxy1 and want to proxy to myapp2-1 over http \u0026amp; https when a request targeting \u0026ldquo;www.super.com\u0026rdquo; arrives, 
all we have to do is add this to the haproxy1 pillar: makina-states.haproxy_registrations.haproxy1: - ip: 10.0.3.14 hosts: [www.super.com] wildcard Wildcards are also supported, via the wildcards key makina-states.haproxy_registrations.haproxy1: - ip: 10.0.3.14 wildcards: [\u0026#39;*.www.super.com\u0026#39;] regex Regexes are also supported, via the regexes key makina-states.haproxy_registrations.haproxy1: - ip: 10.0.3.14 regexes: [\u0026#39;my.*supemyappost.com\u0026#39;, \u0026#39;^/api\u0026#39;] If we want to proxy http to port \u0026ldquo;81\u0026rdquo; of myapp2-1 \u0026amp; https to 444 makina-states.haproxy_registrations.haproxy1: - ip: 10.0.3.14 hosts: [www.super.com] frontends: 80: {to_port: 81} 443: {to_port: 444} redis We have a special redis mode to do custom health checks on a redis cluster Short form if you use the default port on both ends: makina-states.haproxy_registrations.haredis: - ip: 10.0.3.14 # local ip of myapp2-1 frontends: 6378: {} Long forms makina-states.haproxy_registrations.haredis: - ip: 10.0.3.14 frontends: 66378: {to_port: 666, mode: redis} 6378: {mode: redis} Redis auth is supported this way makina-states.haproxy_registrations.haredis: - ip: 10.0.3.14 frontends: 6378: {password: \u0026quot;foobar\u0026quot;, mode: redis} rabbitmq We have a special rabbitmq mode to set sane options on the backend for rabbitmq Short form if you use the default port on both ends: makina-states.haproxy_registrations.haredis: - ip: 10.0.3.14 frontends: 5672: {} Long forms makina-states.haproxy_registrations.haredis: - ip: 10.0.3.14 frontends: 55672: {to_port: 333, mode: rabbitmq} 5672: {mode: rabbitmq} Register 2 backends for one and the same frontend \u0026quot;makina-states.haproxy_registrations.mc_cloud_http1\u0026quot;: - \u0026quot;ip\u0026quot;: \u0026quot;10.5.5.2\u0026quot; \u0026quot;hosts\u0026quot;: [\u0026quot;es2.devhost5-1.local\u0026quot;] \u0026quot;frontends\u0026quot;: \u0026quot;80\u0026quot;: \u0026quot;mode\u0026quot;: \u0026quot;http\u0026quot; \u0026quot;makina-states.haproxy_registrations.mc_cloud_http2\u0026quot;: - \u0026quot;ip\u0026quot;: \u0026quot;10.5.5.666\u0026quot; \u0026quot;hosts\u0026quot;: [\u0026quot;es2.devhost5-2.local\u0026quot;] \u0026quot;frontends\u0026quot;: \u0026quot;80\u0026quot;: \u0026quot;mode\u0026quot;: \u0026quot;http\u0026quot;","description":"","section":"topics","subtitle":null,"tags":["topics","cloud","haproxy"],"title":"Haproxy","type":"topics","url":"/topics/haproxy/"},{"content":"Nodetypes Your choice for nodetype is one of: scratch (default) : only manages the ansible/salt installation and configuration. You will want this mode if you want to apply your states explicitly, without relying on a default nodetype configuration. server : matches a baremetal server, and manages it from end to end (base packages, network, locales, sshd, crond, logrotate, etc, by default) vm : a VM (not baremetal); this is mostly like server lxccontainer : matches an lxc container; mostly like server, but installs and fixes the lxc boot scripts laptop : mostly like server, but also installs packages for working on a development machine (pre-baking a laptop for a developer) dockercontainer : matches a docker container; mostly like server, but installs \u0026amp; preconfigures circus to manage daemons. devhost : a development machine; enables states meant to act on such a box, for example installing a local-loop test mailer. 
vagrantvm : flag vagrant boxes and is a subtype of devhost You can tell boot-salt2.sh which nodetype to use via the --nodetype switch boot-salt2.sh --nodetype server --reconfigure Switching to another nodetype on an already installed environment If you installed the scratch preset and want to switch to another preset: bin/salt-call state.sls makina-states.nodetypes.\u0026lt;your_new_preset\u0026gt; If you installed a preset and want to switch to another preset: edit etc/makina-states/nodetype and put your new preset edit etc/makina-states/nodetypes.yaml and set to false your old preset Ask bootsalt to remember boot-salt2.sh --nodetype \u0026lt;your_new_preset\u0026gt; --reconfigure Finally, run: bin/salt-call state.sls makina-states.nodetypes.\u0026lt;your_new_preset\u0026gt;","description":"","section":"reference","subtitle":null,"tags":["reference","installation"],"title":"Nodetypes","type":"reference","url":"/reference/nodetypes/"},{"content":"Specification A project in corpus / makina-states is a git repository checkout which contains the code and a well known saltstack based procedure to deploy it from end to end in the .salt folder. Bear in mind the project layout You may have a look to the (now outdatedà original project_corpus specification A good sumup of the spec is as follow: There is a separate repo distributed along the project named pillar to store configuration variables, passwords and so on. Projects are deployed via instructions based on saltstack which are contained into the .salt folder along the codebase. Projects can be deployed in two modes: via a git push on a local, separated git repository where some hooks are wired to launch the deployment If no remotes, deploy the code source we have locally The deployment folder, .salt, that you will provide along your codebase will describe how to deploy your project. Deployment consist in META deployment phases: archive .salt/archive.sls: synchronnise the project to the archive folder before deploying sync code from remotes if there are remotes: if any remotes, synchronise both the pillar and the project git folder to their corresponding checkouted working copies sync/install custom salt modules (exec, states, etc) from the codebase if any from the .salt folder fixperms (.salt/fixperms.sls): enforce filesystem permissions install (.salt/install.sls): Run project deployment procedure Run any sls in the .salt folder (alphanum sorted) which: Is not a main procedure sls Is not a task (beginning with task the convention is to name them \\d\\d\\d_NAME.sls (000_prereqs.sls) fixperms (.salt/fixperms.sls): enforce filesystem permissions rollback (.salt/rollback.sls): if error, rollback procedure (by default sync from archived folder notify OPTIONNAL/LEGACY (.salt/notify.sls): after deployment, notify commands All other sls found at toplevel of the .salt folder which are not those ones are executed in lexicographical order (alphanum) and The .salt/PILLAR.sample file contains default configuration variable for your project and helps you to know what variable to override in your custom pillar. Structure This empty structure respects the aforementioned corpus reactor layout, and is just an useless helloword project which should look like: /srv/projects/\u0026lt;project_name\u0026gt; | |- pillar/init.sls: override values in PILLAR.sample and define | any other arbitrary pillar DATA. 
| |- data/: anything which is persisted to disk must live here | from drupal sites/default/files, python eggs, buildouts parts, | gems cache, sqlite files, static files, docroots, etc. | |- project/ \u0026lt;- a checkout or your project | |- .git | |- codebase | |- .salt | |- _modules : custom salt python exec modules | |- _states : custom salt python states modules | |- _runners : custom salt python runners modules | |- _... | | | |- PILLAR.sample | |- task_foo.sls | |- 00_deploy.sls | [ If \u0026quot;remote_less\u0026quot; is False (default) |- git/project.git: bare git repos synchronnized (bi-directional) | with project/ used by git push style deployment |- git/pillar.git: bare git repos synchronnized (bi-directional) | with pillar/ used by git push style deployment | |- arhives/ \u0026lt;- past deployment archive folders | |- \u0026lt;U-U-I-D1\u0026gt; | |- \u0026lt;U-U-I-D2\u0026gt; What you want to do is to replace the project folder by your repo. This one contains your code, as asual, plus the .salt folder. WELL Understand what is : a salt SLS, it is the nerve of the war. the Pillar of salt. be ware, on the production server the .git/config is linked with the makina-states machinery, NEVER MESS WITH ORIGIN AND MASTER BRANCH. Ensure to to have at least in your project git folder: .salt/PILLAR.sample: configuration default values to use in SLSes .salt/archive.sls: archive step .salt/fixperms.sls: fixperm step .salt/rollback.sls: rollback step You can then add as many SLSes as you want, and the ones directly in .salt will be executed in alphabetical order except the ones beginning with task_ (task_foo.sls). Indeed the ones beginning with task_ are different beasts and are intended to be either included by your other slses to factor code out or to be executed manually via the mc_project.run_task command. You can and must have a look for inspiration on project templates Deployment workflows To build and deploy your project we provide two styles of doing style that should be appropriate for most use cases. A local build workflow A distant git-push style workflow The local build workflow INITIALLY: use mc_project.init_project to create the structure to host your project use mc_project.report to verify things are in place Then Edit/Place code in the pillar folder: /srv/projects/\u0026lt;project\u0026gt;/pillar to configure the project Edit/Place code in the project folder: /srv/projects/\u0026lt;project\u0026gt;/project and manually launch the deploy Wash, Rince, Repeat The git push to prod deploy workflow INITIALLY: use mc_project.init_project to create the structure to host your project use mc_project.report to verify things are in place Then git push/or edit then push the pillar host:/srv/projects/\u0026lt;project\u0026gt;/git/pillar.git to configure the project git push/or edit then push the code inside host:/srv/projects/\u0026lt;project\u0026gt;/git/project.git which triggers the deploy Wash, Rince, Repeat Deployment stategy To sum all that up, when beginning a project you will: Initialize if not done a project structure with salt-call mc_project.init_project project If you do not want git remotes, you can alternativly use salt-call mc_project.init_project project remote_less=true add a .salt folder alongside your project codebase (in it\u0026rsquo;s git repo). 
deploy it, either by: In remote_less=True mode or connected to the remote host to deploy onto: edit/commit/push in host:/srv/projects/\u0026lt;project\u0026gt;/pillar edit/commit/push/push to force in host:/srv/projects/\u0026lt;project\u0026gt;/project Launch the salt-call mc_project.deploy \u0026lt;project\u0026gt; only=install,fixperms,sync_modules dance In remote_less=False mode: git push your pillar files to host:/srv/projects/\u0026lt;project\u0026gt;/git/pillar.git git push your project code to host:/srv/projects/\u0026lt;project\u0026gt;/git/project.git (this last push triggers a deploy on the remote server) Your can use --force as the deploy system only await the .salt folder. As long as the folder is present of the working copy you are sending, the deploy system will be happy. Wash, Rince, Repeat Initialize the project layout The first thing to do when deploying a project is to create a nest from it: IF IT IS NOT ALREADY DONE (just ls /srv/projects to check): bin/salt-call --local --retcode-passthrough -lall mc_project.init_project \\ \u0026lt;project_name\u0026gt; \\ [remote_less=false/true] Dont be long, and dont use - \u0026amp; _ for the project name (opt) remote_less instructs to deploy with or without the git repos that allow users to use (or not) a git push to prod to deploy workflow. If remote_less=true, the git repos wont be created, and you will have to use only the the local build workflow. If remote_less=false, you can use both the local build workflow and the git push to prod deploy workflow. --local -lall instructs to run in masterless mode and extra verbosity mc_project.init_project $project instructs to create the layout of the name $project project living into /srv/projects/$project/project Running deployment procedure The following command is the nerve of the war: bin/salt-call --local --retcode-passthrough -lall \\ mc_project.deploy $project\\ [only=step2[,step1]] \\ [only_steps=step2[,step1]] mc_project.deploy $project instructs to deploy the name $project project living into /srv/projects/$project/project (opt) only instructs to execute only the named global phases, and when deploying directly onto a machine, you will certainly have to use only=install,fixperms,sync_modules to avoid the archive/sync/rollback steps. (opt) only_steps instruct to execute only a specific or multiple specific sls from the .salt folder during the install phase. Tutorial Projects reminder tool WARNING: you can use it only if you provisionned your project with attached remotes (the default) Remember use the remotes inside /srv/projects/\u0026lt;project\u0026gt;/git and not directly the working copies If you push on the pillar, it does not trigger a deploy If you push on the project, it triggers the full deploy procedure including archive/sync/rollback. 
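As a concrete sketch of those two cases (reusing the bare repository paths above; host and myproject are placeholders):
# pushing configuration: nothing is deployed
git push host:/srv/projects/myproject/git/pillar.git HEAD:master
# pushing code: this triggers the full deploy procedure (archive/sync/install, rollback on error)
git push host:/srv/projects/myproject/git/project.git mybranch:master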
To get useful push informations, on the remote server to deploy to, just do salt-call -lall mc_project.report Using local build workflow install makinastates Initialise the layout (only the first time) ssh root@remoteserver export project=\u0026quot;foo\u0026quot; salt-call mc_project.init_project $project remote_less=true Edit the pillar cd /srv/projects/$project/pillar $EDITOR init.sls git commit -am up Add your project code export project=\u0026quot;foo\u0026quot; cd /srv/projects/$project/project # if not already done, add your project repo remote git remote add o https://github.com/o/myproject.git # in any cases, update your code git fetch --all git reset --hard remotes/o/\u0026lt;the branch to deploy\u0026gt; Run the deployment procedure, skipping archive/rollback as you are connected and live editing salt-call mc_project.deploy $project only=install,fixperms,sync_modules # or to deploy only a specific sls salt-call \\ mc_project.deploy $project \\ only=install,fixperms,sync_modules only_steps=000_foo.sls When you want to commit your changes export project=\u0026quot;foo\u0026quot; cd vms/VM/srv/$project/project git push o HEAD:\u0026lt;master\u0026gt; # replace master by the branch you want to push # back onto your forge Using git push to prod deploy workflow install makinastates The following lines edit the pillar, and push it, this does not trigger a deploy cd $WORKSPACE/myproject git clone host:/srv/projects/project/git/pillar.git $EDITOR pillar/init.sls cd pillar;git commit -am up;git push;cd .. The following lines prepare a clone of your project codebase to be able to be deployed onto production or staging servers cd $WORKSPACE/myproject git clone git@github.com/makinacorpus/myawsomeproject.git git remote add prod /srv/projects/project/git/project.git git fetch --all To trigger a remote deployment, now you can do: git push [--force] prod \u0026lt;mybranch\u0026gt;:master eg: git push [--force] prod \u0026lt;mybranch\u0026gt;:master eg: git push [--force] prod awsome_feature:master REMINDER: DONT MESS WITH THE ORIGIN REMOTE when your are connected to your server in any of the pillar or project directory.. The \u0026lt;branchname\u0026gt;:master is really important as everything in the production git repositories is wired on the master branch. You can push any branch you want from your original repository, but in production, there is only master. Configuration pillar \u0026amp; variables We provide in mc_project a powerfull mecanism to define configuration variables used in your deployments that you can safely override in the salt pillar files. This means that you can set some default values for, eg a domain name or a password, and input the production values that you won\u0026rsquo;t commit along side your project codebase. Default values have to be stored inside the PILLAR.sample file. Some of those variables, the one at the first level are mostly read only and setup by makina-states itself. 
The most important are: name: the project name user: the system user of your project group: the system group of your project data: top level free variables mapping project_root: project root absolute path data_root: persistent folder absolute path default_env: environment (staging/prod/dev) pillar_root: absolute path to the pillar fqdn: machine FQDN The only variables that you can edit at the first level are: remote_less: whether this project uses git remotes for triggering deployments default_env: environment (valid values are staging/dev/prod) The other variables, members of the data sub-entry, are free for you to add/edit. Anything defined in the pillar pillar/init.sls overloads what is in project/.salt/PILLAR.sample. You can get and consult the result of the assembled configuration like this: bin/salt-call --retcode-passthrough mc_project.get_configuration \u0026lt;project_name\u0026gt; Remember that projects have a name, and the pillar key to configure and overload your project configuration is based on this name. EG: If your project is named foo, you will have to use makina-projects.foo in place of makina-projects.example. Example: in project/.salt/PILLAR.sample, you have: makina-projects.projectname: data: start_cmd: \u0026#39;myprog\u0026#39; in pillar/init.sls, you have: makina-projects.foo: data: start_cmd: \u0026#39;myprog2\u0026#39; In your states files, you can access the configuration via the magic opts.ms_project variable. In your modules or file templates, you can access the configuration via salt['mc_project.get_configuration'](name). A tip for loading the configuration from a template is doing something like that: # project/.salt/00_deploy.sls {% set cfg = opts.ms_project %} toto: file.managed: - name: /etc/foo - source: salt://makina-projects/{{cfg.name}}/files/etc/foo - template: jinja - user: {{cfg.user}} - group: {{cfg.group}} - defaults: project: {{cfg.name}} # project/.salt/files/etc/foo {% set cfg = opts.ms_project %} My Super Template of {{cfg.name}} will run {{cfg.data.start_cmd}} SaltStack integration As you know, makina-states embeds its own virtualenv and salt codebase. In makina-states, we use by default: makina-states itself: /srv/makina-states a virtualenv inside /srv/makina-states/venv salt from a fork installed inside $makinastates/venv/src/salt At first sight, the project layout does not seem integrated with those folders, but in fact the project initialisation routines create symlinks to integrate it, which look like: $makinastates/salt/makina-projects/\u0026lt;project_name\u0026gt; -\u0026gt; /srv/projects/\u0026lt;project_name\u0026gt;/project/.salt $makinastates/pillar/pillar.d/makina-projects/\u0026lt;project_name\u0026gt; -\u0026gt; /srv/projects/\u0026lt;project_name\u0026gt;/pillar The pillar is auto-included in the pillar top (/srv/pillar/top.sls). The project salt files are not, and must not be included in the salt top for further highstates unless you know what you are doing. 
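A quick way to check that a given project is wired in is simply to list those links; this assumes the default /srv/makina-states install prefix, and myproject is a placeholder:
ls -l /srv/makina-states/salt/makina-projects/myproject
ls -l /srv/makina-states/pillar/pillar.d/makina-projects/myproject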
You can unlink your project from salt with: bin/salt-call --retcode-passthrough mc_project.unlink \u0026lt;project_name\u0026gt; You can link project from salt with: bin/salt-call --retcode-passthrough mc_project.link \u0026lt;project_name\u0026gt;","description":"","section":"reference","subtitle":null,"tags":["reference","projects"],"title":"Usage","type":"reference","url":"/reference/projects/usage/"},{"content":"controllers states purpose is mainly to check: our bundled salt \u0026amp; ansible binaries are ready for operation the cloned code is up to date and clean If the user selected them: crons are in place install links (/usr/local/bin) are present RollingRealse updater crons are in place","description":"","section":"reference","subtitle":null,"tags":["reference","installation"],"title":"Controllers","type":"reference","url":"/reference/controllers/"},{"content":"LocalSettings states are related to the low level configuration of an environment EG: locales network low level configuration](hardrive, etc) installing compilers \u0026amp; language interpreters configuring base tools \u0026amp; editors configuring SSH client distributing SSL certificates If any other preset than scratch has been activated, Many localsettings will be applied by default, see mc_localsettings:registry States localsettings States non exhaustive shortcuts: State State State State localsettings/apparmor localsettings/nodejs localsettings/hostname localsettings/sudo localsettings/autoupgrade localsettings/npm localsettings/hosts localsettings/sysctl localsettings/casperjs localsettings/nscd localsettings/init.sls localsettings/systemd localsettings/check_raid localsettings/phantomjs localsettings/insserv localsettings/timezone localsettings/desktoptools localsettings/pkgs localsettings/jdk localsettings/updatedb localsettings/dns localsettings/python localsettings/ldap localsettings/users localsettings/editor localsettings/reconfigure-network localsettings/locales localsettings/vim localsettings/env localsettings/repository_dotdeb localsettings/localrc localsettings/mvn localsettings/etckeeper localsettings/rvm localsettings/grub localsettings/ssl localsettings/git localsettings/screen localsettings/golang localsettings/shell localsettings/groups localsettings/sshkeys","description":"","section":"reference","subtitle":null,"tags":["reference","installation"],"title":"LocalSettings","type":"reference","url":"/reference/localsettings/"},{"content":"Salt States dedicated to install services on the targeted environment States services States non exhaustive shortcuts: backup State State services/backup/bacula services/backup/dbsmartbackup services/backup/burp services/backup/rdiff-backup base State State services/base/cron services/base/ntp services/base/dbus services/base/ssh services/cache/memcached misc State State services/collab/etherpad services/queue/rabbitmq services/sound/mumble services/ftp/pureftpd db State State services/db/mongodb services/db/postgresql services/db/mysql services/db/redis dns State State services/dns/dhcpd services/dns/slapd services/dns/bind firewall State State services/firewall/shorewall services/firewall/firewalld services/firewall/firewall services/firewall/ms_iptables services/firewall/fail2ban services/firewall/psad gis State services/gis/postgis services/gis/ubuntugis http State State services/http/nginx services/http/apache_modfastcgi services/http/apache services/http/apache_modfcgid services/java/tomcat services/http/apache_modproxy log State State services/log/rsyslog 
services/log/ulogd Mail State State services/mail/dovecot services/mail/postfix Monitoring State State services/monitoring/client services/monitoring/icinga_web2 services/monitoring/icinga services/monitoring/nagvis services/monitoring/icinga2 services/monitoring/pnp4nagios services/monitoring/icinga_web services/monitoring/snmpd php State State services/php/common services/php/modphp services/php/phpfpm services/php/phpfpm_with_apache services/php/phpfpm_with_nginx Proxy State State services/proxy/haproxy services/proxy/uwsgi virt State State services/virt/docker services/virt/kvm services/virt/lxc services/virt/virtualbox","description":"","section":"reference","subtitle":null,"tags":["reference","installation"],"title":"Services","type":"reference","url":"/reference/services/"},{"content":"In makina-states services managers act on top of services to manage their lifecycle (start/stop/restart/reload). systemd circus supervisor","description":"","section":"reference","subtitle":null,"tags":["reference","installation"],"title":"Services Managers","type":"reference","url":"/reference/servicesmanagers/"},{"content":"Here are a non exhaustive list of macros that makina-states provides and you may use in your states. Helpers h(helpers) deliver_deliver_config_files: push configuration files service_restart_reload: aim to restart or reload service but handle the case where services mis implement the status function and mess salt modules. toggle_service: helper to call service_restart_reload ssl macros install_certificate install_cert_in_dir nginx nginx nginx.vhost apache apache apache.vhost php php php.fpm_pool php.minimal_index php.toggle_ext mysql macros mysql_db mysql_group postgresql macros postgresql_db postgresql_user postgresql_group install_pg_exts install_pg_ext circusd macros circusAddWatcher supervisord macros supervisorAddProgram mongodb macros mongodb_db mongodb_user uwsgi macros config rabbitmq macros rabbitmq_vhost","description":"","section":"reference","subtitle":null,"tags":["reference","installation"],"title":"Macros","type":"reference","url":"/reference/macros/"},{"content":"This page is the most important thing you ll have to read about makina-states as a developer consumer, take the time it needs and deserves. Never be afraid to go read makina-states code, it will show you how to configure and extend it. It is simple python and yaml. See python exemples: the modules (saltstack doc about modules) the states (saltstack doc about states) List of templates for inspiration Templates RFC RFC","description":"","section":"reference","subtitle":null,"tags":["reference","projects"],"title":"Projects","type":"reference","url":"/reference/projects/"},{"content":"RFC: corpus, embedded project configuration with salt ORIGINAL 2012 DOCUMENT ACCUSING ITS AGE and converted to markdown You\u0026rsquo;d better to read the Original RST Document The origin Nowoday no one of the P/I/S/aaS existing platforms fit our needs and habits. No matter of the gret quality of docker, heroku or openshift, they did\u0026rsquo;nt make it for us. And, really, those software are great, they inspired corpus a lot ! Please note also, that in the long run we certainly and surely integrate those as plain corpus drivers to install our projects on ! For exemple, this is not a critisism at all, but that\u0026rsquo;s why we were not enougth to choose one of those platforms (amongst all of the others): heroku: non free docker: Not enougth stable yet, networking integration is really a problem here. 
It is doable, but just complicated and out of scope. do not implement all of our needs, it is more tied to the \u0026lsquo;guest\u0026rsquo; part (see next paragraphs) But ! Will certainly replace the LXC guests driver in the near future. openshift Tied to SElinux and RedHat (we are more on the Debian front ;)). However, its great design inspired a lot of the corpus one. openstack Irrelevant and not so incompatible for the PaaS platform, but again we didn\u0026rsquo;t want locking, openstack would lock us in the first place. We want an agnostic PaaS Platform. The needs That\u0026rsquo;s why we created a set of tools to build the best flexible PaaS platform ever. That\u0026rsquo;s why we call that not a PaaS platform but a Glue PaaS Platform :=). - Indeed, what we want is more of a CloudController + ToolBox + Dashboards + API. - This one will be in charge of making projects install any kind of compute nodes running any kind of VMs smoothly and flawlessly. - Those projects will never ever be installed directly on compute nodes but rather be isolated. - They will be isolated primarly by isolation-level virtualisation systems (LXC, docker, VServer) - Bue they must also be installable on plain VMs (KVM, Xen) or directly baremetal. We don\u0026rsquo;t want any PaaS platform to suffer from some sort of lockin. We prefer a generic deployment solution that scale, and better AUTOSCALE ! All the glue making the configuration must be centralized and automatically orchestrated. This solution must not be tied to a specific tenant (baremetal, EC2) nor a guest driver type (LXC, docker, XEN). Corrolary, the low level daily tasks consists in managment of: network ( \u0026lt; OSI L3) DNS Mail operationnal supervision Security, IDS \u0026amp; Firewalling storage user management baremetal machines hybrid clouds public clouds VMs containers (vserver, LXC, docker) operationnal supervision Eventually, on top of that orchestrate projects on that infrastructure installation continenous delivery intelligent test reports, deployment reports, statistic, delivery \u0026amp; supervision dashboards backups autoscale Here is for now the pieces or technologies we use or are planning or already using to achieve all of those goals: Developer environments makina-corpus/vms + makina-corpus/makina-states + saltstack/salt Bare metal machines provision makina-states + saltstack Ubuntu server VMs (guests) Ubuntu based lxc-utils LXC containers + makina-states + saltstack DNS: makina-states + bind: local cache dns servers \u0026amp; for the moment dns master for all zones FUTURE makina-states + powerdns: dynamic managment of all DNS zones Filesystem Backup burp Database backup db_smart_backup Network: ceph, openvswitch Logs, stats: now: icinga2 / pnp for nagios future: icinga2 / logstash / kibana Mail postfix User managment (directory) Fusion directory + openldap Security at least shorewall \u0026amp; fail2ban CloudController saltstack makina-states + makina-states/mastersalt+ansible projects installation, upgrades \u0026amp; contineous delivery makina-states (mc_project, project_creation) autoscale makina-states The whole idea The basic parts of corpus PaaS platform: The cloud controller The cloud controller client applications The compute nodes Where are hosted guests Where projects run on The developer environments which are just a special kind of compute nodes The first thing we will have is a classical makina-states installation in mastersalt mode. 
We then will have salt cloud as a cloud controller to control compute nodes via makina-states.services.cloud.{lxc, saltify, \u0026hellip;} (lxc or saltify) Those compute nodes will install guests. Those guests will eventually run the final projects pushed by users. Hence an api and web interface to the controller we can: Add one or more ssh key to link to the host Request to link a new compute node Request to initialize a new compute node List compute nodes with their metadata (ip, dns, available slots, guest type) Get compute ndoos/container/vms base informations (ssh ip / port, username, pasword, dns names) Link more dns to the box Manage (add or free) the local storage. Destroy a container Unlink a compute node The users will just have either: - Push the new code to deploy - Connect via ssh to do extra manual stuff if any including a manual deployment Permission accesses We will use an ldap server to perform authentication The different environment platforms We also want to distinguish at least those 3 environments, so 3 ways for you to deploy at least. dev: The developper environments (laptop) staging: the stagings and any other QA platform prod: the production platform Objectives The layout and projects implementation must allow us to Automaticly rollback any unsucessful deployment In production and staging, archive application content from N last deployments Make the development environment easily editable Make the staging environment a production battletest server Production can deploy from non complex builds, and the less possible dependant of external services For this, we inspired ouselves a lot from openshift and heroku (custom buildpacks) models. Actual layout Overview of the project source code repositories A project will have at least 2 local git repositories: /srv/projects/myproject/git/project.git/ A repository where lives its sourcecode and deployment recipes /srv/projects/myproject/git/pillar.git/ A repository where lives its pillar This repository master branch consequently has the minimal following structure: master |- what/ever/files/you/want |- .salt -\u0026gt; the salt deployment structure |- .salt/PILLAR.sample -\u0026gt; default pillar used in the project, this | file will be loaded inside your | configuration |- .salt/rollback.sls -\u0026gt; rollback code run in case of problems |- .salt/archive.sls -\u0026gt; pre save code which is run upon a deploy | trigger |- .salt/fixperms.sls -\u0026gt; reset permissions script run at the end of | deployment |- .salt/_modules -\u0026gt; custom salt modules to add to local salt | /_runners install | /_outputters | /_states | /_pillars | /_renderers | |- .salt/00_DEPLOYMENT.sls -\u0026gt; all other slses will be executed in order and are to be provided by th users. A private repository with restricted access with any configuration data needed to deploy the application on the PAAS platform. This is in our case the project pillar tree: pillar master |- init.sls the pillar configuration As anyways, you ll push changes to the PAAS platform, no matter what you push, the PAAS platform will construct according to the pushed code :). So you can even git push -f if you want to force things. Overview of the paas local directories /srv/projects/myproject/project/ The local clone of the project branch from where we run in all modes. In other words, this is where the application runtimes files are. 
In application terms: django/python (pip style): the virtualenv \u0026amp; the root of runtime-generated configuration files zope: this will be the root where bin/instance is launched and where the buildout.cfg is php webapps: this will be your document root + all resources nodejs, etc.: this will be where nginx searches for static files and where the nodejs app resides. /srv/projects/myproject/pillar The local clone of the project-specific pillar tree. /srv/projects/myproject/data/ Where any persistent data must live /srv/pillar/makina-projects/myproject -\u0026gt; /srv/projects/myproject/pillar pillar symlink for salt integration /srv/salt/makina-projects/myproject -\u0026gt; /srv/projects/myproject/.salt/\u0026lt;env\u0026gt; state tree project symlink for salt integration /srv/salt/{_modules,_runners,_outputters,_states,_pillars,_renderers}/*py -\u0026gt; /srv/projects/myproject/.salt/\u0026lt;typ\u0026gt;/mod.py custom salt python execution modules The deployment procedure is as simple as running meta slses which in turn call your project ones, contained in a subfolder of the .salt directory, during the install phase. The .salt directory will contain SLSes executed in lexicographical order. You will have to take example from other projects inside makina-states/projects or write your own states. Those slses are in charge of installing your project. The persistent configuration directories /etc static global configuration (/etc) The persistent data directories: If you want to deploy something inside, make a new archive in the release directory with a dump or a copy of one of those files/directories. /var : global data directories (data \u0026amp; logs) (/var), minus the package manager cache related directories /srv/projects/project/data : specific application data (/srv/projects/project/data) - Data.fs and logs in the zope world - drupal thumbnails - mongodb document root - ... Network-wise, to enable switching from one container to another we have several solutions, but in any case no ports must be directly wired to the container. Never EVER. Either: Make the host receive the inbound traffic and redirect (NAT) it to the underlying container Make a proxy container receive all dedicated traffic; this specific container will then redirect the traffic to the real underlying production container. Procedures Those procedures will be implemented by either: Manual user operations or commands Git hooks salt execution modules jinja macros (collections of saltstack states) All procedures are tied to a default sls inside the .salt project folder and can thus be overridden. Project initialization/sync procedure Initiate the project-specific user Initiate the ssh keys if any Initiate the pillar and project bare git repositories inside the git folder Clone local copies inside the project, pillar and salt directories If the salt folder does not exist, create it If any of the default procedure slses are not yet present, create them Wire the pillar configuration inside the pillar root Wire the pillar init.sls file to the global pillar top file Wire the salt configuration inside the salt root Echo the git remotes to push the new deployment to.
Wire any salt modules in .salt/{_modules,runners,etc} Project archive procedure If the container is too small, we enlarge it Run the pre-archive hooks Archive the project directory in an archive/deployed subdirectory Run the post-archive hooks (make extra dumps or persistent data copies) Run the archive rotation job Project release-sync procedure Be sure to sync the last git deploy hook from makina-states Fetch the last commits inside the deploy directory Project install procedure We run all slses in the project .salt directory which are not tied to any default procedure. Project fixperms procedure Set \u0026amp; reset (enforce) the needed user accesses to the filesystem Rollback procedure Only run if something has gone wrong We move the failed project directory into the deployment archives/\u0026lt;UUID\u0026gt;/project.failed subdirectory We sync back the previous deployment code to the project directory We execute the rollback hook (the user can plug in database dump reloads) Workflows Full procedure project deployment is triggered project archive procedure project initialization/sync procedure project release-sync procedure project fixperms procedure project install procedure project fixperms procedure (yes, again) On error: rollback procedure IMPLEMENTATION: How a project is built and deployed For now, at makinacorpus, we see it this way: Install somewhere a mastersalt master controlling compute nodes, accessible only by ops. Install elsewhere at least one compute node which will receive project nodes (containers): linked to this mastersalt as a mastersalt minion a salt minion linked to a salt master which is probably local and controlled by project members, aka devs; by default these salt minion and salt master services are toggled off and salt-call should be run masterless (salt-call --local) Initialisation of a cloud controller Complex, contact @makinacorpus. This includes: Setting up the dns master \u0026amp; slaves for the cloud-controlled zone Setting up the cloud database Setting up at least one compute node to deploy projects Deploying vms Request of a compute node or a container Edit the mastersalt database file to include your compute node and vm configuration. Run any appropriate mastersalt runners to deploy \u0026amp; operate your compute nodes and vms Initialisation of a compute node This will, in order: Authenticate the user Check the information needed to attach a node via salt cloud Register DNS in the dns master for this compute node and its related vms Generate a new ssh key pair Install the guest_type base system (eg: makina-states.services.virt.lxc) Generate root credentials and store them in grains on mastersalt Configure the basic container pillar on mastersalt root credentials dns firewall rules default env (dev, prod, preprod) compute mode override if any (default_env inside /srv/salt/custom.sls) Run the mastersalt highstate.
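For illustration, when run locally on the compute node in the masterless mode mentioned above, that final highstate step amounts to something like this (a sketch only; the flags are the ones used elsewhere in this documentation):
bin/salt-call --local --retcode-passthrough -linfo state.sls makina-states.top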
Initialisation of a project - container environment This will, in order: - Authenticate the user - Create a new container on the endpoint with those root credentials - Create the layout - Use the desired salt cloud driver to attach the distant host as a new minion - Install the key pair to access the box as root - Generate root credentials and store them in grains on mastersalt - Configure the basic container pillar on mastersalt - root credentials - dns - firewall rules - Run the mastersalt highstate Initialisation of a project We run the initialization/sync project procedure Send a mail to the sysadmins, or a bot, and the initial igniter with the information about the new platform: access basic http \u0026amp; https url access ssh access root credentials The user creates the project Project directories are initialised Upgrade of a project The code is not pulled by the production server; it is pushed with git to the environment ssh endpoint: Triggered either by an automated bot (jenkins) Or by the user himself, provided he has enough access Either way, the trigger is a git push. The nerve of the war: jinja macros and states, and execution modules Project state writing is done by layering a set of saltstack sls files in a certain order. Those will ensure an automatic deployment from end to end. The salt states and macros make heavy use of execution modules to gather information but also to act on the underlying system. The project common data structure Overview To factorize the configuration code but also keep track of specific settings, those macros use a common data mapping structure which is well suited to storing defaults while overriding variables in a common manner via the pillar. All those macros take as input this configuration data structure, which is a mapping containing all variables and metadata about your project. This common data mapping is not copied over but always passed as a reference; this means that you can change settings in a macro and see those changes in later macros. The project configuration registry execution module helper The base execution module used for project management is module_mc_project. Under the hood it calls the latest API version of the mc_project module, eg: mc_project_2.* This defines methods for: Crafting the base configuration data structure Initialising the project filesystem layout and pillar, and downloading the base source code for deployment (salt branch) Deploying and upgrading an already installed project Setting a project configuration If there are too many changes in a project layout, obviously a new project API module should be created and registered so the others keep their stability. APIV2 The project execution module interface (APIV2) Note that there are two parts in the module: One set of methods, the ones you are most likely to use, handles local deployment Another set of methods is able to handle remote deployments over ssh. The only requirement for the other host is that makina-states must be installed first and ssh access must be configured prior to any deploy call. The requirement was to rely only on basic ssh access; that is why we did not go for a RAET or 0MQ salt deployment structure here. See module_mc_project_2 The project sls interface (APIV2) Each project must define a set of common slses which will be interfaced with and orchestrated by the project execution module. These slses follow the aforementioned procedure style.
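As an illustration only, driving a local deployment through the execution module looks like the following; treat the function names as assumptions to check against the module_mc_project_2 reference:
bin/salt-call -lall mc_project.get_configuration myproject
bin/salt-call -lall mc_project.deploy myproject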
The important thing to remember now is that those special sls files cannot be run without the project runner execution module. Indeed, we inject into those sls contexts a special cfg variable which is the project configuration, and without it we cannot deploy correctly. We have two sets of slses to consider: The set of slses provided by a makina-states installer: this is specified at project creation and stored in the configuration for further reference The set of slses provided by the project itself in the .salt directory: this is where the user customizes his deployment steps. The installer set is then included by default in the first generation of the user set at the creation of the project. Project initialisation \u0026amp; installation Refer to project_creation Some installer examples: projects_project_list","description":"","section":"reference","subtitle":null,"tags":["reference","paas","projects"],"title":"Projects RFC","type":"reference","url":"/reference/projects/RFC/"},{"content":"Those projects can be used as-is or as kickstarters for your own makina-states based projects, either by copying them or by taking inspiration from them to create your own Project skeletons compliant with makina-states corpus-drupal corpus-php corpus-django corpus-zope-plone corpus-staticwww databases corpus-solr corpus-mysql corpus-elasticsearch corpus-mongodb corpus-rabbitmq corpus-pgsql webapps corpus-seafile corpus-redmine corpus-vaultier corpus-fusiondirectory corpus-sabnzbd corpus-legacyenketo corpus-odoo corpus-piwik Osm corpus-osmdb corpus-osmrender corpus-tilemill Code \u0026amp; CI corpus-gitlab corpus-jenkins corpus-jenkins-slave corpus-svn corpus-gitlabrunner Misc corpus-mumble corpus-irssi Mail corpus-mailman3","description":"","section":"reference","subtitle":null,"tags":["reference","projects","templates"],"title":"Templates","type":"reference","url":"/reference/templates/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Ansible","type":"page","url":"/tags/ansible/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Cloud","type":"page","url":"/tags/cloud/"},{"content":"LXC","description":"","section":"cloud","subtitle":null,"tags":["cloud","lxc"],"title":"Cloud Documentation","type":"cloud","url":"/cloud/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Databasesls","type":"page","url":"/tags/databasesls/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Haproxy","type":"page","url":"/tags/haproxy/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Install","type":"page","url":"/tags/install/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Installation","type":"page","url":"/tags/installation/"},{"content":"makinastates lxc container integration consists in: spawning lxc containers, including static ip allocation and port mapping providing NAT, DHCP \u0026amp; DNS (dnsmasq \u0026amp; the makinastates magicbridge) configuring reverse proxies on the baremetal using: a frontal haproxy to proxy incoming http(s) requests iptables to reverse-proxy ssh requests WARNING: currently only these backing stores are supported/tested: dir, overlayfs.
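Once a container is up, http(s) traffic for its domains terminates on the frontal haproxy of the compute node, so a quick end-to-end check from any workstation is simply (a sketch using the example domain declared in cloud_vm_attrs below; -k tolerates a self-signed certificate):
curl -kI https://superapp.truc.foo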
We name: $controller : the station from where we operate to control the other resources $cn : the compute node where we will spawn LXC containers $vm : the LXC container to spawn $vm_tmpl : the name of the container to clone from Preliminary configuration Go into the makina-states folder: cd /srv/makina-states Edit database.sls, especially ips, vms, \u0026amp; cloud_vm_attrs: $EDITOR etc/makina-states/database.sls ips should contain the ips for $controller (usually localhost) and the $cn ips: mycontainer.foo.loc: 1.2.3.4 vms should contain a reference to indicate where we will spawn your container vms: lxc: mycontainer.foo.loc: # \u0026lt;- baremetal - foocontainer.truc.foo # \u0026lt;- container cloud_vm_attrs should certainly contain domains to proxy http requests to the underlying containers cloud_vm_attrs: foocontainer.truc.foo: domains: superapp.truc.foo Define shell variables to copy/paste the following commands: export controller=\u0026quot;$(hostname -f)\u0026quot; export cn=\u0026quot;$(hostname -f)\u0026quot; # export cn=\u0026quot;c.foo.net\u0026quot; export vm=\u0026quot;d.foo.net\u0026quot; export vm_tmpl=\u0026quot;makinastates\u0026quot; Preparation on the controller Refresh the cache: # bin/salt-call -lall \\ # state.sls makina-states.services.cache.memcached # the first time: service memcached restart _scripts/refresh_makinastates_pillar.sh # or to limit the hosts the run will act on: ANSIBLE_TARGETS=\u0026quot;$controller,$vm,$cn\u0026quot; _scripts/refresh_makinastates_pillar.sh Preinstalling makina-states (controller \u0026amp; each compute node) Installing it on the controller: ANSIBLE_TARGETS=\u0026quot;$controller\u0026quot; bin/ansible-playbook \\ ansible/plays/makinastates/install.yml Installing it on the compute nodes: ANSIBLE_TARGETS=\u0026quot;$cn\u0026quot; bin/ansible-playbook \\ ansible/plays/makinastates/install.yml Configure the dns on a full makina-states infra with mc_pillar: ANSIBLE_TARGETS=\u0026quot;$DNS_MASTER\u0026quot; bin/ansible-playbook \\ ansible/plays/cloud/controller.yml Compute node related part Configure the compute_node with: ANSIBLE_TARGETS=\u0026quot;$cn\u0026quot; ansible-playbook \\ ansible/plays/cloud/compute_node.yml Cooking and delivery of containers / container templates Initialise an lxc container that will be the base of our image (after creation, go edit inside it until you are satisfied with the result): ANSIBLE_TARGETS=\u0026quot;$controller,lxc$vm_tmpl\u0026quot; bin/ansible-playbook \\ ansible/plays/cloud/create_container.yml Synchronise it to an offline image; this will copy the container to the image and remove parts from it (like ssh keys) to depersonalize it: ANSIBLE_TARGETS=\u0026quot;$controller\u0026quot; bin/ansible-playbook \\ ansible/plays/cloud/snapshot_container.yml -e \u0026quot;lxc_template=$vm_tmpl lxc_container_name=lxc$vm_tmpl\u0026quot; Arguments: ANSIBLE_TARGETS : compute node where the container resides (must be in the ansible inventory) lxc_template : lxc image to create lxc_container_name : lxc container which serves as the base for the image Transfer the template to the compute node where you want to spawn containers from that image: ANSIBLE_TARGETS=\u0026quot;$cn\u0026quot; ansible-playbook \\ ansible/plays/cloud/sync_container.yml \\ -e \u0026quot;lxc_orig_host=$controller lxc_container_name=$vm_tmpl\u0026quot; Arguments: ANSIBLE_TARGETS : both orig and dest lxc_host : where to transfer the container/template to lxc_orig_host : where to transfer the container/template from lxc_container_name : lxc container to transfer lxc template creation release=vivid # choose
your distrib here lxc-create -t ubuntu -n ${release} -- -r ${release} lxc-clone -o ${release} -n lxcmakinastates${release} lxc-attach -n lxcmakinastates${release} -- apt-get install git python ca-certificates git clone https://github.com/makinacorpus/makina-states.git /srv/makina-states /srv/makina-states/_scripts/boot-salt.sh -C -n lxccontainer --highstates || ( \\ . /srv/makina-states/venv/bin/activate \u0026amp;\u0026amp;\\ pip install --upgrade pip \u0026amp;\u0026amp;\\ deactivate \u0026amp;\u0026amp;\\ /srv/makina-states/_scripts/boot-salt.sh -C --highstates) # any additional stuff to complete the image # cmd1 Initialise a container Initialise and finish the container provisioning (from scratch): ANSIBLE_TARGETS=\u0026quot;$cn,$vm\u0026quot; bin/ansible-playbook \\ ansible/plays/cloud/create_container.yml Arguments: ANSIBLE_TARGETS : compute node where the container resides (must be in the ansible inventory) \u0026amp; lxc container to create lxc_from_container : lxc container from which to init the container lxc_backing_store : (opt) backing store to use Initialise and finish the container provisioning (from a template): ANSIBLE_TARGETS=\u0026quot;$cn,$vm\u0026quot; bin/ansible-playbook \\ ansible/plays/cloud/create_container.yml -e \u0026quot;lxc_from_container=$vm_tmpl\u0026quot; Special case: use overlayfs to create the container: ANSIBLE_TARGETS=\u0026quot;$cn,$vm\u0026quot; bin/ansible-playbook \\ ansible/plays/cloud/create_container.yml \\ -e \u0026quot;lxc_from_container=$vm_tmpl lxc_backing_store=overlayfs\u0026quot;","description":"","section":"cloud","subtitle":null,"tags":["cloud","lxc"],"title":"LXC containers management","type":"cloud","url":"/cloud/lxc/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Lxc","type":"page","url":"/tags/lxc/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Minions","type":"page","url":"/tags/minions/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Paas","type":"page","url":"/tags/paas/"},{"content":"Briefing MakinaStates as a whole is a combination of ansible and salt aimed at operating a cluster from baremetal to project delivery. Note that makina-states does not use the regular salt daemons (minion/master) to operate remotely but an ansible bridge that copies the pillar and uses salt-call locally, directly on the remote box, via SSH. Ansible gets its information from makinastates by retrieving the salt pillar for a particular host through a custom dynamic inventory. Compatibility For now, you will have to use Ubuntu \u0026gt;= 14.04. Makina-States can be ported to any linux-based OS, but we use ubuntu server here, and this is the only supported system for now.
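A quick pre-flight check on the target (a sketch; it assumes the standard lsb_release tool is installed):
lsb_release -rs # must print 14.04 or higher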
That Ubuntu system can be used in any flavor: lxc, docker, baremetal, kvm, etc.","description":"","section":"reference","subtitle":null,"tags":["topics","installation"],"title":"Preamble","type":"reference","url":"/reference/preamble/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Projects","type":"page","url":"/tags/projects/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Reference","type":"page","url":"/tags/reference/"},{"content":"See here","description":"","section":"reference","subtitle":null,"tags":["topics","installation"],"title":"Reference","type":"reference","url":"/reference/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Salt","type":"page","url":"/tags/salt/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Ssl","type":"page","url":"/tags/ssl/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Tags","type":"page","url":"/tags/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Templates","type":"page","url":"/tags/templates/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Topics","type":"page","url":"/tags/topics/"},{"content":"haproxy SSL","description":"","section":"topics","subtitle":null,"tags":["topics"],"title":"Topics","type":"topics","url":"/topics/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Upgrade","type":"page","url":"/tags/upgrade/"},{"content":"","description":"","section":"","subtitle":null,"tags":[],"title":"Usage","type":"page","url":"/tags/usage/"},{"content":"Switch makina-states branch To switch to a makina-states branch, like the v2 branch in production: bin/boot-salt2.sh -b v2 Update the makina-states local copy To sync the makinastates code: bin/boot-salt2.sh -C -S Running makinastates in highstate mode To install all enabled makina-states services, after having configured your pillar: bin/boot-salt2.sh -n laptop|server|lxccontainer|vm -C \u0026amp;\u0026amp; \\ bin/salt-call --retcode-passthrough state.sls makina-states.top Upgrade An upgrade will: Run predefined \u0026amp; scheduled upgrade code Update the makina-states repositories in /srv/salt \u0026amp; /srv/makina-states Update the core repositories (like the salt source code in /srv/makina-states/src/salt) Do the highstates (salt, and the master one if any) bin/boot-salt2.sh -C -S \u0026amp;\u0026amp; \\ bin/salt-call --retcode-passthrough state.sls makina-states.top Use ansible \u0026amp; salt Salt Ansible","description":"","section":"usage","subtitle":null,"tags":["topics","installation"],"title":"Usage","type":"usage","url":"/usage/"}]