This is a working version with completed documentation
This commit is contained in:
parent 103eed2982
commit a3d0ec041b
2 .gitignore vendored
@@ -1,3 +1,3 @@
# ---> Ansible
*.retry

*.log
339 README.md
@@ -1,3 +1,338 @@
# OKD 3.11 Vagrant Development Environment

A comprehensive Vagrant-based development environment for OKD (OpenShift Origin) 3.11, providing a multi-node cluster setup with authentication, persistent volumes, a service registry, and S2I examples.

## Table of Contents

- [Overview](#overview)
- [Architecture](#architecture)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Configuration Options](#configuration-options)
- [Post-Installation Setup](#post-installation-setup)
- [Examples and Use Cases](#examples-and-use-cases)
- [Troubleshooting](#troubleshooting)
- [Advanced Configuration](#advanced-configuration)

## Overview

This project creates a complete OKD 3.11 cluster using Vagrant with the following features:

- **Multi-node setup**: Master, worker nodes, and a dedicated storage/services node
- **Automated provisioning**: Full cluster deployment with Ansible playbooks
- **Multiple deployment options**: Full cluster, all-in-one, or custom configurations
- **Comprehensive examples**: Authentication, persistent volumes, S2I builds, and more
- **Production-ready**: HAProxy load balancer configuration included

## Architecture

### Full Cluster Setup (Default)

| Machine            | Address      | Memory | CPUs | Roles               |
|--------------------|--------------|--------|------|---------------------|
| okd.example.com    | 172.27.11.10 | 8GB    | 4    | master, infra, etcd |
| node1.example.com  | 172.27.11.20 | 4GB    | 2    | compute node        |
| node2.example.com  | 172.27.11.30 | 4GB    | 2    | compute node        |
| extras.example.com | 172.27.11.40 | 256MB  | 1    | storage (NFS), LDAP |

**Total Resources Required**: ~16.25GB RAM, 9 CPU cores
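The totals above can be sanity-checked with a quick shell calculation; the per-VM figures are copied from the table (they mirror the Vagrantfile values), not read from Vagrant itself:

```shell
# Sum the per-VM figures from the table: 8GB + 4GB + 4GB + 256MB, 4+2+2+1 CPUs.
total_mb=$((8192 + 4096 + 4096 + 256))
total_cpus=$((4 + 2 + 2 + 1))
awk -v mb="$total_mb" -v cpus="$total_cpus" \
  'BEGIN { printf "RAM: %d MB (~%.2f GB), CPUs: %d\n", mb, mb/1024, cpus }'
# → RAM: 16640 MB (~16.25 GB), CPUs: 9
```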
### Network Configuration

- **Private Network**: 172.27.11.0/24
- **Public Access**: Through nip.io wildcard DNS (*.172-27-11-10.nip.io)
- **Load Balancer**: HAProxy configuration provided for production use

## Prerequisites

### Software Requirements

- **Vagrant** (latest version)
- **VirtualBox** or **libvirt** (KVM)
- **Minimum 16GB RAM** available for the full setup
- **~50GB disk space** for all VMs

### Supported Platforms

- Linux (recommended)
- macOS
- Windows (with some limitations)

## Installation

### Quick Start - Full Cluster

```bash
git clone <repository-url>
cd okdv3
vagrant up
```

The installation process will:
1. Create and configure 4 virtual machines
2. Install required packages and dependencies
3. Run Ansible playbooks for OKD installation:
   - `/root/openshift-ansible/playbooks/prerequisites.yml`
   - `/root/openshift-ansible/playbooks/deploy_cluster.yml`

**⏱️ Installation Time**: 60-90 minutes depending on hardware

### Low Memory Setup (8GB or less)

For systems with limited memory, use the all-in-one configuration:

```bash
git clone <repository-url>
cd okdv3
mv Vagrantfile Vagrantfile.full
mv Vagrantfile.allinone Vagrantfile
vagrant up
```

This creates only 2 VMs:
- **Master**: 4GB RAM (all services)
- **Extras**: 256MB RAM (NFS + LDAP)

## Configuration Options

### Vagrant Provider Support

The project supports both VirtualBox and libvirt providers:

```bash
# VirtualBox (default)
vagrant up

# libvirt/KVM
vagrant up --provider=libvirt
```

### Memory Optimization

To reduce memory usage, edit the Ansible inventory (`files/hosts`) and disable metrics:

```ini
openshift_metrics_install_metrics=false
```

This reduces master memory requirements from 8GB to ~2GB.
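If you prefer not to edit the inventory by hand, the flag can be flipped with `sed`. This is a sketch run against a scratch copy; in the repository you would target `files/hosts` directly:

```shell
# Flip the metrics flag in an Ansible inventory non-interactively.
# Demonstrated on a temporary file standing in for files/hosts.
inventory=$(mktemp)
printf 'openshift_metrics_install_metrics=true\n' > "$inventory"
sed -i 's/^openshift_metrics_install_metrics=.*/openshift_metrics_install_metrics=false/' "$inventory"
grep 'openshift_metrics_install_metrics' "$inventory"
# → openshift_metrics_install_metrics=false
rm -f "$inventory"
```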
### Disabled Services (for performance)

The following services are disabled by default:
- `openshift_logging_install_logging=false`
- `openshift_enable_olm=false`
- `openshift_enable_service_catalog=false`
- `openshift_cluster_monitoring_operator_install=false`

## Post-Installation Setup

### 1. Access the Web Console

Add the hostname to your system's hosts file:

**Linux/macOS**:
```bash
echo '172.27.11.10 okd.example.com' | sudo tee -a /etc/hosts
```

**Windows**:
Edit `C:\Windows\System32\drivers\etc\hosts` and add:
```
172.27.11.10 okd.example.com
```

### 2. Login Credentials

- **Web Console**: https://okd.example.com:8443
- **Username**: `developer`
- **Password**: `4linux`

### 3. Accept SSL Certificates

Visit and accept the self-signed certificate for metrics:
- https://hawkular-metrics.172-27-11-10.nip.io

### 4. CLI Access

SSH into the master node:
```bash
vagrant ssh master
oc login -u developer -p 4linux
```

## Examples and Use Cases

### 1. Authentication (examples/authentication/)

Configure different authentication methods:
- **HTPasswd**: File-based authentication
- **LDAP**: Directory-based authentication (pre-configured)

Example HTPasswd setup:
```bash
vagrant ssh master
sudo htpasswd -bc /etc/origin/master/htpasswd myuser mypassword
# Update master-config.yaml and restart services
```

### 2. Persistent Volumes (examples/persistent-volumes/)

Pre-configured NFS storage on the extras node:

```bash
# Create NFS exports on the extras node
vagrant ssh extras
sudo mkdir -p /srv/nfs/v{0,1,2,3,4}
sudo chmod 0700 /srv/nfs/v{0,1,2,3,4}
sudo chown nfsnobody: /srv/nfs/v{0,1,2,3,4}

# Enable the SELinux boolean for NFS on all nodes
sudo setsebool -P virt_use_nfs 1
```

Deploy persistent volumes:
```bash
vagrant ssh master
oc create -f /vagrant/examples/persistent-volumes/nfs-pv.yml
oc create -f /vagrant/examples/persistent-volumes/cache-pvc.yml
```

### 3. Container Registry (examples/registry/)

Set up and configure the integrated container registry for storing images.

### 4. Source-to-Image (S2I) (examples/s2i/)

Custom S2I builder for the lighttpd web server:
- Custom Dockerfile
- Build and run scripts
- Example application template

Deploy the S2I example:
```bash
vagrant ssh master
oc create -f /vagrant/examples/template/lighttpd.yml
oc new-app lighttpd-s2i
```

## Troubleshooting

### Common Issues

#### 1. Insufficient Memory
**Error**: VM fails to start or OKD pods crash
**Solution**:
- Use the all-in-one configuration
- Disable metrics: Set `openshift_metrics_install_metrics=false`

#### 2. Network Issues
**Error**: Cannot access web console
**Solution**:
- Verify the hosts file entry
- Check VM network: `vagrant ssh master -c "ip addr show"`
- Ensure no firewall is blocking ports 8443, 80, 443
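A quick way to check whether the console port answers at all is a TCP probe; this is a sketch assuming bash's `/dev/tcp` support, with the master's address taken from the table above:

```shell
# Probe TCP port 8443 on the master; prints one line either way.
host=172.27.11.10
port=8443
if timeout 3 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
  echo "port $port on $host: reachable"
else
  echo "port $port on $host: unreachable (firewall, network, or VM down)"
fi
```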
#### 3. Certificate Issues
**Error**: SSL certificate warnings
**Solution**:
- Accept the self-signed certificates in the browser
- For the CLI: `oc login --insecure-skip-tls-verify=true`

#### 4. Storage Issues
**Error**: PVCs stuck in Pending
**Solution**:
```bash
# Check the NFS service on the extras node
vagrant ssh extras
sudo systemctl status nfs-server

# Verify exports
sudo exportfs -v
```

### Debugging Commands

```bash
# Check cluster status
vagrant ssh master
oc get nodes
oc get pods --all-namespaces

# Check services
sudo systemctl status origin-master-api
sudo systemctl status origin-master-controllers

# View logs
sudo journalctl -u origin-master-api -f
sudo journalctl -u origin-master-controllers -f
```

## Advanced Configuration

### Custom Ansible Inventory

The main configuration is in `files/hosts`. Key sections:

```ini
[OSEv3:vars]
# Authentication
openshift_master_identity_providers=[{'name': 'HTPASSWD', 'challenge': 'true', 'login': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'mappingMethod': 'claim'}]

# Networking
openshift_master_default_subdomain='172-27-11-10.nip.io'

# Docker configuration
openshift_docker_options='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=172.30.0.0/16 --exec-opt native.cgroupdriver=systemd'

# Disable checks (for development)
openshift_disable_check='disk_availability,memory_availability,docker_storage,package_availability,docker_image_availability,package_version'
```

### HAProxy Configuration

A production-ready HAProxy configuration is provided in the `haproxy/` directory:
- Load balancer configuration
- SSL termination
- Backend mapping for OpenShift routes

### Provisioning Scripts

Custom provisioning scripts live in the `provision/` directory:
- `master.sh`: Master node setup
- `node.sh`: Worker node configuration
- `extras.sh`: Storage and LDAP setup
- `allinone.sh`: All-in-one deployment

### File Structure

```
okdv3/
├── Vagrantfile              # Main cluster configuration
├── Vagrantfile.allinone     # Single-node configuration
├── Vagrantfile.full         # Full cluster backup
├── files/
│   ├── hosts                # Ansible inventory
│   ├── hosts-allinone       # Single-node inventory
│   ├── key                  # SSH private key
│   ├── key.pub              # SSH public key
│   └── *.ldif               # LDAP configuration
├── provision/               # Provisioning scripts
├── examples/                # Usage examples
│   ├── authentication/      # Auth configuration
│   ├── persistent-volumes/  # Storage examples
│   ├── registry/            # Container registry
│   ├── s2i/                 # Source-to-Image
│   └── template/            # Application templates
└── haproxy/                 # Load balancer config
```

---

## Contributing

Feel free to submit issues, feature requests, and pull requests to improve this OKD development environment.

## License

This project is provided as-is for educational and development purposes.
34 Vagrantfile vendored Normal file
@@ -0,0 +1,34 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

vms = {
  'node1' => {'memory' => '4096', 'cpus' => 2, 'ip' => '20', 'host' => 'node1', 'provision' => 'node.sh'},
  'node2' => {'memory' => '4096', 'cpus' => 2, 'ip' => '30', 'host' => 'node2', 'provision' => 'node.sh'},
  'extras' => {'memory' => '256', 'cpus' => 1, 'ip' => '40', 'host' => 'extras', 'provision' => 'extras.sh'},
  'master' => {'memory' => '8192', 'cpus' => 4, 'ip' => '10', 'host' => 'okd', 'provision' => 'master.sh'}
}

Vagrant.configure('2') do |config|

  config.vm.box = 'centos/7'
  config.vm.box_check_update = false

  vms.each do |name, conf|
    config.vm.define "#{name}" do |k|
      k.vm.hostname = "#{conf['host']}.example.com"
      k.vm.network 'private_network', ip: "172.27.11.#{conf['ip']}"
      k.vm.provider 'virtualbox' do |vb|
        vb.memory = conf['memory']
        vb.cpus = conf['cpus']
      end
      k.vm.provider 'libvirt' do |lv|
        lv.memory = conf['memory']
        lv.cpus = conf['cpus']
        lv.cputopology :sockets => 1, :cores => conf['cpus'], :threads => '1'
      end
      k.vm.provision 'shell', path: "provision/#{conf['provision']}"
    end
  end

  config.vm.provision 'shell', path: 'provision/provision.sh'
end
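The `vms` hash maps each VM to the last octet of its private IP ("172.27.11." plus the `ip` value). The resulting addresses can be listed with a quick loop; this sketch mirrors the hash values rather than reading the Vagrantfile:

```shell
# Reproduce the IP assignment: "172.27.11." + the 'ip' value per VM.
for entry in node1:20 node2:30 extras:40 master:10; do
  name=${entry%%:*}
  octet=${entry##*:}
  echo "$name -> 172.27.11.$octet"
done
```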
32 Vagrantfile.allinone Normal file
@@ -0,0 +1,32 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

vms = {
  'extras' => {'memory' => '256', 'cpus' => 1, 'ip' => '40', 'host' => 'extras', 'provision' => 'extras.sh'},
  'master' => {'memory' => '4096', 'cpus' => 4, 'ip' => '10', 'host' => 'okd', 'provision' => 'allinone.sh'}
}

Vagrant.configure('2') do |config|

  config.vm.box = 'centos/7'
  config.vm.box_check_update = false

  vms.each do |name, conf|
    config.vm.define "#{name}" do |k|
      k.vm.hostname = "#{conf['host']}.example.com"
      k.vm.network 'private_network', ip: "172.27.11.#{conf['ip']}"
      k.vm.provider 'virtualbox' do |vb|
        vb.memory = conf['memory']
        vb.cpus = conf['cpus']
      end
      k.vm.provider 'libvirt' do |lv|
        lv.memory = conf['memory']
        lv.cpus = conf['cpus']
        lv.cputopology :sockets => 1, :cores => conf['cpus'], :threads => '1'
      end
      k.vm.provision 'shell', path: "provision/#{conf['provision']}"
    end
  end

  config.vm.provision 'shell', path: 'provision/provision.sh'
end
34 Vagrantfile.full Normal file
@@ -0,0 +1,34 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

vms = {
  'node1' => {'memory' => '2048', 'cpus' => 2, 'ip' => '20', 'host' => 'node1', 'provision' => 'node.sh'},
  'node2' => {'memory' => '2048', 'cpus' => 2, 'ip' => '30', 'host' => 'node2', 'provision' => 'node.sh'},
  'extras' => {'memory' => '256', 'cpus' => 1, 'ip' => '40', 'host' => 'extras', 'provision' => 'extras.sh'},
  'master' => {'memory' => '6144', 'cpus' => 4, 'ip' => '10', 'host' => 'okd', 'provision' => 'master.sh'}
}

Vagrant.configure('2') do |config|

  config.vm.box = 'centos/7'
  config.vm.box_check_update = false

  vms.each do |name, conf|
    config.vm.define "#{name}" do |k|
      k.vm.hostname = "#{conf['host']}.example.com"
      k.vm.network 'private_network', ip: "172.27.11.#{conf['ip']}"
      k.vm.provider 'virtualbox' do |vb|
        vb.memory = conf['memory']
        vb.cpus = conf['cpus']
      end
      k.vm.provider 'libvirt' do |lv|
        lv.memory = conf['memory']
        lv.cpus = conf['cpus']
        lv.cputopology :sockets => 1, :cores => conf['cpus'], :threads => '1'
      end
      k.vm.provision 'shell', path: "provision/#{conf['provision']}"
    end
  end

  config.vm.provision 'shell', path: 'provision/provision.sh'
end
47 examples/authentication/README.md Normal file
@@ -0,0 +1,47 @@
OKD - Authentication
====================

OKD supports many authentication methods, such as LDAP, HTPasswd, Keystone, and so on.
By default, OKD grants access to any user with any password because **AllowAllPasswordIdentityProvider** is enabled.

HTPasswd
--------

To keep things simple while still having the simplest secure authentication, log in through SSH to the master behind [okd.example.com:8443](okd.example.com:8443) and create an **htpasswd** file with a user and a password:

    htpasswd -bc /etc/origin/master/htpasswd okd pass123

Verify that the password matches the user you chose:

    htpasswd -v /etc/origin/master/htpasswd okd

Now open **master-config.yaml** at */etc/origin/master/master-config.yaml* and replace the only **identityProvider** entry, **allow_all**:

    ...
    identityProviders:
    - challenge: true
      login: true
      mappingMethod: claim
      name: allow_all
      provider:
        apiVersion: v1
        kind: AllowAllPasswordIdentityProvider
    ...

with an **htpasswd** entry:

    - name: htpasswd
      challenge: true
      login: true
      mappingMethod: claim
      provider:
        apiVersion: v1
        kind: HTPasswdPasswordIdentityProvider
        file: /etc/origin/master/htpasswd

Restart the master API and controllers to apply the new configuration:

    /usr/local/bin/master-restart api
    /usr/local/bin/master-restart controllers

Try logging in through the **web console** or **CLI**; after that, you can see the user you created with:

    oc get user
    oc get identity
119 examples/persistent-volumes/README.md Normal file
@@ -0,0 +1,119 @@
OKD - Volumes
=============

Volumes are a way for containers to share data inside a pod. Persistent volumes are the way pods can share volumes with other pods, or even across clusters.

Once you have provisioned your OKD cluster, go to the **storage** machine and create some NFS mount points:

```
mkdir -p /srv/nfs/v{0,1,2,3,4}
chmod 0700 /srv/nfs/v{0,1,2,3,4}
chown nfsnobody: /srv/nfs/v{0,1,2,3,4}

cat > /etc/exports <<EOF
/srv/nfs/v0 172.27.11.0/255.255.255.0(rw,all_squash)
/srv/nfs/v1 172.27.11.0/255.255.255.0(rw,all_squash)
/srv/nfs/v2 172.27.11.0/255.255.255.0(rw,all_squash)
/srv/nfs/v3 172.27.11.0/255.255.255.0(rw,all_squash)
/srv/nfs/v4 172.27.11.0/255.255.255.0(rw,all_squash)
EOF

exportfs -a
systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server
```
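The five export lines follow a single pattern, so they can also be generated with a loop. This sketch writes to a scratch file; on the storage machine you would redirect to `/etc/exports` instead:

```shell
# Generate the /srv/nfs/v0..v4 export entries instead of typing a heredoc.
exports=$(mktemp)
for i in 0 1 2 3 4; do
  echo "/srv/nfs/v$i 172.27.11.0/255.255.255.0(rw,all_squash)" >> "$exports"
done
cat "$exports"
rm -f "$exports"
```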
#### Read and Write

To allow writes from a pod to an NFS volume, you need to relax an SELinux protection on every node:

    setsebool -P virt_use_nfs 1

PersistentVolume
----------------

Once the volumes are exported from the storage server, you can create NFS-type **PersistentVolume** objects to make storage available to the developers in your cluster:

**nfs-pv.yml**

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-mysql
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 172.27.11.40
    path: "/srv/nfs/v0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-cache
spec:
  capacity:
    storage: 512Mi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 172.27.11.40
    path: "/srv/nfs/v1"
```

To see these two **PersistentVolumes**, execute the following command:

    oc get pv

PersistentVolumeClaim
---------------------

To request a volume, a developer needs to create a **PersistentVolumeClaim** object that matches one of the PersistentVolumes available in the cluster:

**cache-pvc.yml**
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cache
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 512Mi
```

Wait a few seconds and list the **PersistentVolumeClaims**. Notice that **nfs-cache** is now bound to the claim:

    oc get pvc

Mount the Volume
----------------

For a simple demonstration, you can attach this volume to a simple alpine pod:

**alpine-pod.yml**
```
apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
  - image: alpine
    name: alpine
    tty: true
    stdin: true
    volumeMounts:
    - name: cached-data
      mountPath: /var/cached-data
  volumes:
  - name: cached-data
    persistentVolumeClaim:
      claimName: cache
```
17 examples/persistent-volumes/alpine-pod.yml Normal file
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
  - image: alpine
    name: alpine
    tty: true
    stdin: true
    volumeMounts:
    - name: cached-data
      mountPath: /var/cached-data
  volumes:
  - name: cached-data
    persistentVolumeClaim:
      claimName: cache

11 examples/persistent-volumes/cache-pvc.yml Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cache
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 512Mi

25 examples/persistent-volumes/nfs-pv.yml Normal file
@@ -0,0 +1,25 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-mysql
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 172.27.11.40
    path: "/srv/nfs/v0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-cache
spec:
  capacity:
    storage: 512Mi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 172.27.11.40
    path: "/srv/nfs/v1"
48 examples/registry/README.md Normal file
@@ -0,0 +1,48 @@
# Registry

To log in to OKD's internal registry and push new images without having to make them public, first grant elevated permissions to the user in question and log in:

```
oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin user
oc login -u user
```

Once that is done, look for the service called **docker-registry**:

```
oc get svc --all-namespaces

NAME              TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
docker-registry   ClusterIP  172.30.136.192   <none>        5000/TCP   3d
```

Then log in using the **docker** command and the user's token:

```
docker login -u user -p $(oc whoami -t) 172.30.136.192:5000
```

We can use the s2i example built in the tutorials in this repository; to do so, create a new tag for the image and then push it to the registry:

```
docker tag lighttpd-centos7 172.30.136.192:5000/openshift/lighttpd-centos7
docker push 172.30.136.192:5000/openshift/lighttpd-centos7
oc get images | grep lighttpd-centos7
```

Pushed to the **openshift** namespace, this image becomes available to all projects, but it is also possible to push it to any other namespace.

### Self-signed certificate

If the registry uses a self-signed certificate, you can add the OKD IP range to Docker's **insecure-registry** directive:

**/etc/sysconfig/docker**
```
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS=' --selinux-enabled --signature-verification=False --insecure-registry=172.30.0.0/16'
...
```
42 examples/s2i/Dockerfile Normal file
@@ -0,0 +1,42 @@
# lighttpd-centos7
FROM openshift/base-centos7

# TODO: Put the maintainer name in the image metadata
LABEL maintainer="Hector Vido <hector_vido@yahoo.com.br>"

# TODO: Rename the builder environment variable to inform users about application you provide them
ENV LIGHTTPD_VERSION=1.4.53

# TODO: Set labels used in OpenShift to describe the builder image
LABEL io.k8s.description="Platform for serving static HTML files" \
      io.k8s.display-name="Lighttpd 1.4.53" \
      io.openshift.expose-services="8080:http" \
      io.openshift.tags="builder,html,lighttpd"

# TODO: Install required packages here:
# RUN yum install -y ... && yum clean all -y
RUN yum install -y epel-release && yum install -y lighttpd && yum clean all -y

# TODO (optional): Copy the builder files into /opt/app-root
# COPY ./<builder_folder>/ /opt/app-root/
# Defines the location of the S2I scripts
LABEL io.openshift.s2i.scripts-url=image:///usr/libexec/s2i

# TODO: Copy the S2I scripts to /usr/libexec/s2i, since openshift/base-centos7 image
# sets io.openshift.s2i.scripts-url label that way, or update that label
COPY ./s2i/bin/ /usr/libexec/s2i

# Copy the lighttpd configuration file
COPY ./etc/ /opt/app-root/etc

# TODO: Drop the root user and make the content of /opt/app-root owned by user 1001
RUN chown -R 1001:1001 /opt/app-root

# This default user is created in the openshift/base-centos7 image
USER 1001

# TODO: Set the default port for applications built using this image
EXPOSE 8080

# TODO: Set the default CMD for the image
CMD ["/usr/libexec/s2i/usage"]
218 examples/s2i/README.md Normal file
@@ -0,0 +1,218 @@
Source to Image - s2i
=====================

This tutorial is a light modification of [https://blog.openshift.com/create-s2i-builder-image/](https://blog.openshift.com/create-s2i-builder-image/).

s2i is a very useful tool for creating builder images, widely used in **OpenShift 3**.
Its main advantage is preventing developers from using system commands during image creation, while providing a standard, best-practice environment for their applications.

Download the **s2i** binary from [https://github.com/openshift/source-to-image/releases/tag/v1.1.14](https://github.com/openshift/source-to-image/releases/tag/v1.1.14) and install it on your machine:

## First

```
wget https://github.com/openshift/source-to-image/releases/download/v1.1.14/source-to-image-v1.1.14-874754de-linux-amd64.tar.gz
tar -xzf source-to-image-v1.1.14-874754de-linux-amd64.tar.gz
mv s2i /usr/bin/
```
## Second

The following command creates a directory called **s2i-lighttpd** which will ultimately produce an image called **lighttpd-centos7**:

```
s2i create lighttpd-centos7 s2i-lighttpd
```

The contents of the directory will look like this:

```
find s2i-lighttpd/

s2i-lighttpd/
s2i-lighttpd/s2i
s2i-lighttpd/s2i/bin
s2i-lighttpd/s2i/bin/assemble
s2i-lighttpd/s2i/bin/run
s2i-lighttpd/s2i/bin/usage
s2i-lighttpd/s2i/bin/save-artifacts
s2i-lighttpd/Dockerfile
s2i-lighttpd/README.md
s2i-lighttpd/test
s2i-lighttpd/test/test-app
s2i-lighttpd/test/test-app/index.html
s2i-lighttpd/test/run
s2i-lighttpd/Makefile
```

## Third

Modify the Dockerfile so it looks like the following:

**Dockerfile**

```
# lighttpd-centos7
FROM openshift/base-centos7

# TODO: Put the maintainer name in the image metadata
LABEL maintainer="Hector Vido <hector_vido@yahoo.com.br>"

# TODO: Rename the builder environment variable to inform users about application you provide them
ENV LIGHTTPD_VERSION=1.4.53

# TODO: Set labels used in OpenShift to describe the builder image
LABEL io.k8s.description="Platform for serving static HTML files" \
      io.k8s.display-name="Lighttpd 1.4.53" \
      io.openshift.expose-services="8080:http" \
      io.openshift.tags="builder,html,lighttpd"

# TODO: Install required packages here:
# RUN yum install -y ... && yum clean all -y
RUN yum install -y epel-release && yum install -y lighttpd && yum clean all -y

# TODO (optional): Copy the builder files into /opt/app-root
# COPY ./<builder_folder>/ /opt/app-root/
# Defines the location of the S2I scripts
LABEL io.openshift.s2i.scripts-url=image:///usr/libexec/s2i

# TODO: Copy the S2I scripts to /usr/libexec/s2i, since openshift/base-centos7 image
# sets io.openshift.s2i.scripts-url label that way, or update that label
COPY ./s2i/bin/ /usr/libexec/s2i

# Copy the lighttpd configuration file
COPY ./etc/ /opt/app-root/etc

# TODO: Drop the root user and make the content of /opt/app-root owned by user 1001
RUN chown -R 1001:1001 /opt/app-root

# This default user is created in the openshift/base-centos7 image
USER 1001

# TODO: Set the default port for applications built using this image
EXPOSE 8080

# TODO: Set the default CMD for the image
CMD ["/usr/libexec/s2i/usage"]
```

## Fourth

Modify the file responsible for building the application:

**s2i/bin/assemble**
```
#!/bin/bash -e
#
# S2I assemble script for the 'lighttpd-centos7' image.
# The 'assemble' script builds your application source so that it is ready to run.
#
# For more information refer to the documentation:
# https://github.com/openshift/source-to-image/blob/master/docs/builder_image.md
#

# If the 'lighttpd-centos7' assemble script is executed with the '-h' flag, print the usage.
if [[ "$1" == "-h" ]]; then
  exec /usr/libexec/s2i/usage
fi

echo "---> Installing application source..."
cp -Rf /tmp/src/. ./
```
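The core of the assemble script is just the `cp -Rf /tmp/src/. ./` copy. It can be simulated locally with scratch directories standing in for `/tmp/src` and the builder's working directory:

```shell
# Simulate the assemble step: stage a source tree, then copy it into the
# "application" directory the same way the script does.
src=$(mktemp -d)   # stands in for /tmp/src
app=$(mktemp -d)   # stands in for the builder's working directory
echo '<h1>hello</h1>' > "$src/index.html"
cp -Rf "$src/." "$app/"
ls "$app"
# → index.html
rm -rf "$src" "$app"
```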
|
||||
|
||||
## Quinto
|
||||
|
||||
Modifique o arquivo responsável por iniciar a aplicação:
|
||||
|
||||
**s2i/bin/run**
|
||||
```
|
||||
#!/bin/bash -e
|
||||
#
|
||||
# S2I run script for the 'lighttpd-centos7' image.
|
||||
# The run script executes the server that runs your application.
|
||||
#
|
||||
# For more information see the documentation:
|
||||
# https://github.com/openshift/source-to-image/blob/master/docs/builder_image.md
|
||||
#
|
||||
|
||||
exec lighttpd -D -f /opt/app-root/etc/lighttpd.conf
|
||||
```
|
||||
|
||||
## Sexto
|
||||
|
||||
Dentro do arquivo *usage* colocaremos informações de como utilizar a imagem:
|
||||
|
||||
|
||||
**s2i/bin/usage**
|
||||
```
|
||||
#!/bin/bash -e

cat <<EOF
This is the lighttpd-centos7 S2I image:
To use it, install S2I: https://github.com/openshift/source-to-image

Sample invocation:

s2i build https://github.com/hector-vido/sti-lighttpd.git lighttpd-centos7 lighttpd-ex

You can then run the resulting image via:
docker run -p 8080:8080 lighttpd-ex
EOF
```
|
||||
|
||||
## Seventh

Create an **etc** folder and place the *lighttpd* configuration file inside it:

**etc/lighttpd.conf**

```
|
||||
# directory where the documents will be served from
server.document-root = "/opt/app-root/src"

# port the server listens on
server.port = 8080

# default file if none is provided in the URL
index-file.names = ( "index.html" )

# configure specific mimetypes, otherwise application/octet-stream will be used for every file
mimetype.assign = (
  ".html" => "text/html",
  ".txt" => "text/plain",
  ".jpg" => "image/jpeg",
  ".png" => "image/png"
)
```
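Before baking the configuration into the image, it can be worth syntax-checking it locally. A minimal sketch, assuming *lighttpd* may or may not be installed on your workstation (the check is skipped gracefully if it is not):

```shell
# Guarded syntax check: only runs when lighttpd is installed locally
# and the config file exists; otherwise it reports that it skipped.
if command -v lighttpd >/dev/null 2>&1 && [ -f etc/lighttpd.conf ]; then
  lighttpd -tt -f etc/lighttpd.conf && check="config OK"
else
  check="skipped (lighttpd not installed or config missing)"
fi
echo "$check"
```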
|
||||
|
||||
## Eighth

With that done, build the application with **make**, which internally calls the *docker build* command:

```
make
```
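The `make` target here is just a thin wrapper; a sketch of the equivalent direct build (image name taken from this guide, guarded so it is a no-op on hosts without docker or a Dockerfile):

```shell
# Roughly what `make` runs under the hood:
#   docker build -t lighttpd-centos7 .
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t lighttpd-centos7 .
  built="yes"
else
  built="no (docker or Dockerfile missing)"
fi
echo "build attempted: $built"
```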
|
||||
|
||||
## Ninth

To see the **usage** script in action, simply run the image:

```
docker run lighttpd-centos7

This is the lighttpd-centos7 S2I image:
To use it, install S2I: https://github.com/openshift/source-to-image

Sample invocation:

s2i build https://github.com/hector-vido/sti-lighttpd.git lighttpd-centos7 lighttpd-ex

You can then run the resulting image via:
docker run -p 8080:8080 lighttpd-ex
```
|
||||
|
||||
Since we are building our own image, we could drop HTML files into the **test/test-app/** directory and run *s2i build test/test-app/ lighttpd-centos7 lighttpd-ex*. But since the referenced repository exists, let's use it:

```
s2i build https://github.com/hector-vido/sti-lighttpd.git lighttpd-centos7 lighttpd-ex
docker run -p 8080:8080 lighttpd-ex
```
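For a quick local smoke test without cloning the repository, the local `test/test-app/` route above can be sketched like this (the page content is just an example; the build itself still requires `s2i` and `docker` on the host):

```shell
# Create a minimal static page for the builder image to serve.
mkdir -p test/test-app
cat > test/test-app/index.html <<'EOF'
<html><body><h1>Hello from lighttpd S2I</h1></body></html>
EOF
# Then build and run locally:
#   s2i build test/test-app/ lighttpd-centos7 lighttpd-ex
#   docker run -p 8080:8080 lighttpd-ex
```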
|
||||
16
examples/s2i/assemble
Normal file
@ -0,0 +1,16 @@
|
||||
#!/bin/bash -e
|
||||
#
|
||||
# S2I assemble script for the 'lighttpd-centos7' image.
|
||||
# The 'assemble' script builds your application source so that it is ready to run.
|
||||
#
|
||||
# For more information refer to the documentation:
|
||||
# https://github.com/openshift/source-to-image/blob/master/docs/builder_image.md
|
||||
#
|
||||
|
||||
# If the 'lighttpd-centos7' assemble script is executed with the '-h' flag, print the usage.
|
||||
if [[ "$1" == "-h" ]]; then
|
||||
exec /usr/libexec/s2i/usage
|
||||
fi
|
||||
|
||||
echo "---> Installing application source..."
|
||||
cp -Rf /tmp/src/. ./
|
||||
16
examples/s2i/lighttpd.conf
Normal file
@ -0,0 +1,16 @@
|
||||
# directory where the documents will be served from
|
||||
server.document-root = "/opt/app-root/src"
|
||||
|
||||
# port the server listens on
|
||||
server.port = 8080
|
||||
|
||||
# default file if none is provided in the URL
|
||||
index-file.names = ( "index.html" )
|
||||
|
||||
# configure specific mimetypes, otherwise application/octet-stream will be used for every file
|
||||
mimetype.assign = (
|
||||
".html" => "text/html",
|
||||
".txt" => "text/plain",
|
||||
".jpg" => "image/jpeg",
|
||||
".png" => "image/png"
|
||||
)
|
||||
10
examples/s2i/run
Normal file
@ -0,0 +1,10 @@
|
||||
#!/bin/bash -e
|
||||
#
|
||||
# S2I run script for the 'lighttpd-centos7' image.
|
||||
# The run script executes the server that runs your application.
|
||||
#
|
||||
# For more information see the documentation:
|
||||
# https://github.com/openshift/source-to-image/blob/master/docs/builder_image.md
|
||||
#
|
||||
|
||||
exec lighttpd -D -f /opt/app-root/etc/lighttpd.conf
|
||||
12
examples/s2i/usage
Normal file
@ -0,0 +1,12 @@
|
||||
#!/bin/bash -e
|
||||
cat <<EOF
|
||||
This is the lighttpd-centos7 S2I image:
|
||||
To use it, install S2I: https://github.com/openshift/source-to-image
|
||||
|
||||
Sample invocation:
|
||||
|
||||
s2i build https://github.com/hector-vido/sti-lighttpd.git lighttpd-centos7 lighttpd-ex
|
||||
|
||||
You can then run the resulting image via:
|
||||
docker run -p 8080:8080 lighttpd-ex
|
||||
EOF
|
||||
25
examples/template/README.md
Normal file
@ -0,0 +1,25 @@
|
||||
# Templates
|
||||
|
||||
Templates are one of OpenShift's most interesting resources, since they make it easy to provision predefined objects: pods, deployments, services, routes, or all of them together.
Inside each template several other objects are defined, which is easy to spot by the many repeated **kind** keys.

Rather than creating a template from scratch, it is easier to pick one that resembles what you want and then modify it. Let's use the **httpd-example** template from the catalog to create a template for **lighttpd**:
|
||||
|
||||
```
|
||||
oc get templates -n openshift
|
||||
oc get templates -n openshift httpd-example -o yaml > lighttpd.yml
|
||||
```
|
||||
|
||||
Make whatever modifications you find relevant and then add the new template to the cluster:
|
||||
|
||||
```
|
||||
oc apply -f lighttpd.yml
|
||||
```
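Instead of editing the defaults in the YAML, the template can also be instantiated with explicit parameters via `oc process`. A sketch, assuming the parameter names from the template above and an `oc` client logged in to a cluster (guarded so it skips where `oc` is unavailable):

```shell
# Render the template with overridden parameters and apply the result.
if command -v oc >/dev/null 2>&1 && [ -f lighttpd.yml ]; then
  oc process -f lighttpd.yml -p NAME=lighttpd-demo -p MEMORY_LIMIT=256Mi | oc apply -f -
  applied="yes"
else
  applied="no (oc client or lighttpd.yml not available)"
fi
echo "applied: $applied"
```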
|
||||
|
||||
The icons used can come from **Openshift** itself:
|
||||
|
||||
[https://rawgit.com/openshift/openshift-logos-icon/master/demo.html](https://rawgit.com/openshift/openshift-logos-icon/master/demo.html)
|
||||
|
||||
Or from **font awesome** version 4:
|
||||
|
||||
[https://fontawesome.com/v4.7.0/icons/](https://fontawesome.com/v4.7.0/icons/)
|
||||
210
examples/template/lighttpd.yml
Normal file
@ -0,0 +1,210 @@
|
||||
apiVersion: template.openshift.io/v1
|
||||
kind: Template
|
||||
labels:
|
||||
app: lighttpd-example
|
||||
template: lighttpd-example
|
||||
message: |-
|
||||
The following service(s) have been created in your project: ${NAME}.
|
||||
|
||||
For more information about using this template, including OKD considerations, see https://raw.githubusercontent.com/hector-vido/lighttpd-ex/master/README.md.
|
||||
metadata:
|
||||
annotations:
|
||||
description: An example Lighttpd HTTP Server application that serves static
|
||||
content. For more information about using this template, including OpenShift
|
||||
considerations, see https://raw.githubusercontent.com/hector-vido/lighttpd-ex/master/README.md.
|
||||
iconClass: "fa fa-paper-plane-o"
|
||||
openshift.io/display-name: Lighttpd Server
|
||||
openshift.io/documentation-url: https://github.com/hector-vido/lighttpd-ex
|
||||
openshift.io/long-description: This template defines resources needed to develop
|
||||
a static application served by Lighttpd Server, including a build
|
||||
configuration, application deployment configuration and HPA.
|
||||
openshift.io/provider-display-name: $Linux
|
||||
openshift.io/support-url: https://www.hector-vido.com.br
|
||||
tags: quickstart,lighttpd
|
||||
template.openshift.io/bindable: "false"
|
||||
name: lighttpd-example
|
||||
namespace: openshift
|
||||
objects:
|
||||
- apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
annotations:
|
||||
description: Exposes and load balances the application pods
|
||||
name: ${NAME}
|
||||
spec:
|
||||
ports:
|
||||
- name: web
|
||||
port: 8080
|
||||
targetPort: 8080
|
||||
selector:
|
||||
name: ${NAME}
|
||||
- apiVersion: v1
|
||||
kind: Route
|
||||
metadata:
|
||||
name: ${NAME}
|
||||
spec:
|
||||
host: ${APPLICATION_DOMAIN}
|
||||
to:
|
||||
kind: Service
|
||||
name: ${NAME}
|
||||
- apiVersion: v1
|
||||
kind: ImageStream
|
||||
metadata:
|
||||
annotations:
|
||||
description: Keeps track of changes in the application image
|
||||
name: ${NAME}
|
||||
- apiVersion: v1
|
||||
kind: BuildConfig
|
||||
metadata:
|
||||
annotations:
|
||||
description: Defines how to build the application
|
||||
template.alpha.openshift.io/wait-for-ready: "true"
|
||||
name: ${NAME}
|
||||
spec:
|
||||
output:
|
||||
to:
|
||||
kind: ImageStreamTag
|
||||
name: ${NAME}:latest
|
||||
source:
|
||||
contextDir: ${CONTEXT_DIR}
|
||||
git:
|
||||
ref: ${SOURCE_REPOSITORY_REF}
|
||||
uri: ${SOURCE_REPOSITORY_URL}
|
||||
type: Git
|
||||
strategy:
|
||||
sourceStrategy:
|
||||
from:
|
||||
kind: ImageStreamTag
|
||||
name: lighttpd-centos7:latest
|
||||
namespace: ${NAMESPACE}
|
||||
type: Source
|
||||
triggers:
|
||||
- type: ImageChange
|
||||
- type: ConfigChange
|
||||
- github:
|
||||
secret: ${GITHUB_WEBHOOK_SECRET}
|
||||
type: GitHub
|
||||
- generic:
|
||||
secret: ${GENERIC_WEBHOOK_SECRET}
|
||||
type: Generic
|
||||
- apiVersion: v1
|
||||
kind: DeploymentConfig
|
||||
metadata:
|
||||
annotations:
|
||||
description: Defines how to deploy the application server
|
||||
template.alpha.openshift.io/wait-for-ready: "true"
|
||||
name: ${NAME}
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
name: ${NAME}
|
||||
strategy:
|
||||
type: Rolling
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
name: ${NAME}
|
||||
name: ${NAME}
|
||||
spec:
|
||||
containers:
|
||||
- env: []
|
||||
image: ' '
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /
|
||||
port: 8080
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 3
|
||||
name: lighttpd
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /
|
||||
port: 8080
|
||||
initialDelaySeconds: 3
|
||||
timeoutSeconds: 3
|
||||
resources:
|
||||
limits:
|
||||
cpu: ${CPU_LIMIT}
|
||||
memory: ${MEMORY_LIMIT}
|
||||
requests:
|
||||
cpu: ${CPU_LIMIT}
|
||||
memory: ${MEMORY_LIMIT}
|
||||
triggers:
|
||||
- imageChangeParams:
|
||||
automatic: true
|
||||
containerNames:
|
||||
- lighttpd
|
||||
from:
|
||||
kind: ImageStreamTag
|
||||
name: ${NAME}:latest
|
||||
type: ImageChange
|
||||
- type: ConfigChange
|
||||
- apiVersion: autoscaling/v1
|
||||
kind: HorizontalPodAutoscaler
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
name: ${NAME}
|
||||
spec:
|
||||
maxReplicas: 5
|
||||
minReplicas: 1
|
||||
scaleTargetRef:
|
||||
apiVersion: apps.openshift.io/v1
|
||||
kind: DeploymentConfig
|
||||
name: ${NAME}
|
||||
targetCPUUtilizationPercentage: ${CPU_PERCENT}
|
||||
parameters:
|
||||
- description: The name assigned to all of the frontend objects defined in this template.
|
||||
displayName: Name
|
||||
name: NAME
|
||||
required: true
|
||||
value: lighttpd-example
|
||||
- description: The OpenShift Namespace where the ImageStream resides.
|
||||
displayName: Namespace
|
||||
name: NAMESPACE
|
||||
required: true
|
||||
value: openshift
|
||||
- description: Maximum amount of memory the container can use.
|
||||
displayName: Memory Limit
|
||||
name: MEMORY_LIMIT
|
||||
required: true
|
||||
value: 128Mi
|
||||
- description: Maximum % of vcore the container can use.
|
||||
displayName: CPU Limit
|
||||
name: CPU_LIMIT
|
||||
required: true
|
||||
value: 200m
|
||||
- description: Maximum % usage to activate autoscaling.
|
||||
displayName: CPU Percent
|
||||
name: CPU_PERCENT
|
||||
required: true
|
||||
value: "80"
|
||||
- description: The URL of the repository with your application source code.
|
||||
displayName: Git Repository URL
|
||||
name: SOURCE_REPOSITORY_URL
|
||||
required: true
|
||||
value: https://github.com/hector-vido/lighttpd-ex.git
|
||||
- description: Set this to a branch name, tag or other ref of your repository if you
|
||||
are not using the default branch.
|
||||
displayName: Git Reference
|
||||
name: SOURCE_REPOSITORY_REF
|
||||
- description: Set this to the relative path to your project if it is not in the root
|
||||
of your repository.
|
||||
displayName: Context Directory
|
||||
name: CONTEXT_DIR
|
||||
- description: The exposed hostname that will route to the httpd service, if left
|
||||
blank a value will be defaulted.
|
||||
displayName: Application Hostname
|
||||
name: APPLICATION_DOMAIN
|
||||
- description: Github trigger secret. A difficult to guess string encoded as part
|
||||
of the webhook URL. Not encrypted.
|
||||
displayName: GitHub Webhook Secret
|
||||
from: '[a-zA-Z0-9]{40}'
|
||||
generate: expression
|
||||
name: GITHUB_WEBHOOK_SECRET
|
||||
- description: A secret string used to configure the Generic webhook.
|
||||
displayName: Generic Webhook Secret
|
||||
from: '[a-zA-Z0-9]{40}'
|
||||
generate: expression
|
||||
name: GENERIC_WEBHOOK_SECRET
|
||||
43
files/ansible.cfg
Normal file
@ -0,0 +1,43 @@
|
||||
# config file for ansible -- http://ansible.com/
|
||||
# ==============================================
|
||||
|
||||
# This config file provides examples for running
|
||||
# the OpenShift playbooks with the provided
|
||||
# inventory scripts.
|
||||
|
||||
[defaults]
|
||||
# Set the log_path
|
||||
log_path = ~/openshift-ansible.log
|
||||
|
||||
# Additional default options for OpenShift Ansible
|
||||
forks = 20
|
||||
host_key_checking = False
|
||||
retry_files_enabled = False
|
||||
retry_files_save_path = ~/ansible-installer-retries
|
||||
nocows = True
|
||||
remote_user = root
|
||||
roles_path = roles/
|
||||
gathering = smart
|
||||
fact_caching = jsonfile
|
||||
fact_caching_connection = $HOME/ansible/facts
|
||||
fact_caching_timeout = 600
|
||||
callback_whitelist = profile_tasks
|
||||
inventory_ignore_extensions = secrets.py, .pyc, .cfg, .crt, .ini
|
||||
# work around privilege escalation timeouts in ansible:
|
||||
timeout = 30
|
||||
|
||||
# Uncomment to use the provided example inventory
|
||||
#inventory = inventory/hosts.example
|
||||
|
||||
[inventory]
|
||||
# fail more helpfully when the inventory file does not parse (Ansible 2.4+)
|
||||
unparsed_is_failed=true
|
||||
|
||||
# Additional ssh options for OpenShift Ansible
|
||||
[ssh_connection]
|
||||
pipelining = True
|
||||
ssh_args = -o ControlMaster=auto -o ControlPersist=600s
|
||||
timeout = 10
|
||||
# shorten the ControlPath which is often too long; when it is,
|
||||
# ssh connection reuse silently fails, making everything slower.
|
||||
control_path = %(directory)s/%%h-%%r
|
||||
13
files/base.ldif
Normal file
@ -0,0 +1,13 @@
|
||||
dn: dc=extras,dc=example,dc=com
|
||||
dc: extras
|
||||
o: Origin Kubernetes Distribution LDAP
|
||||
objectclass: organization
|
||||
objectclass: dcObject
|
||||
|
||||
dn: ou=users,dc=extras,dc=example,dc=com
|
||||
objectClass: organizationalUnit
|
||||
ou: users
|
||||
|
||||
dn: ou=groups,dc=extras,dc=example,dc=com
|
||||
objectClass: organizationalUnit
|
||||
ou: groups
|
||||
14
files/groups.ldif
Normal file
@ -0,0 +1,14 @@
|
||||
dn: cn=admins,ou=groups,dc=extras,dc=example,dc=com
|
||||
objectClass: top
|
||||
objectClass: posixGroup
|
||||
cn: admins
|
||||
gidNumber: 10000
|
||||
memberUid: ronnie.james
|
||||
|
||||
dn: cn=users,ou=groups,dc=extras,dc=example,dc=com
|
||||
objectClass: top
|
||||
objectClass: posixGroup
|
||||
cn: users
|
||||
gidNumber: 10001
|
||||
memberUid: lou.gramm
|
||||
memberUid: tina.turner
|
||||
30
files/hosts
Normal file
@ -0,0 +1,30 @@
|
||||
[OSEv3:children]
|
||||
masters
|
||||
nodes
|
||||
etcd
|
||||
|
||||
[OSEv3:vars]
|
||||
ansible_ssh_user=root
|
||||
openshift_enable_olm=false
|
||||
openshift_deployment_type=origin
|
||||
openshift_enable_service_catalog=false
|
||||
openshift_metrics_install_metrics=true
|
||||
openshift_logging_install_logging=false
|
||||
openshift_cluster_monitoring_operator_install=false
|
||||
openshift_master_default_subdomain='172-27-11-10.nip.io'
|
||||
openshift_disable_check='disk_availability,memory_availability,docker_storage,package_availability,docker_image_availability,package_version'
|
||||
openshift_docker_options='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=172.30.0.0/16 --exec-opt native.cgroupdriver=systemd'
|
||||
openshift_master_identity_providers=[{'name': 'HTPASSWD', 'challenge': 'true', 'login': 'true', 'kind':'HTPasswdPasswordIdentityProvider', 'mappingMethod': 'claim'}]
|
||||
openshift_enable_excluders=false
|
||||
openshift_docker_excluder_install=false
|
||||
|
||||
[masters]
|
||||
okd.example.com openshift_public_ip='172.27.11.10' openshift_public_hostname='okd.example.com'
|
||||
|
||||
[etcd]
|
||||
okd.example.com etcd_ip='172.27.11.10'
|
||||
|
||||
[nodes]
|
||||
okd.example.com openshift_node_group_name='node-config-master-infra'
|
||||
node1.example.com openshift_node_group_name='node-config-compute'
|
||||
node2.example.com openshift_node_group_name='node-config-compute'
|
||||
29
files/hosts-allinone
Normal file
@ -0,0 +1,29 @@
|
||||
[OSEv3:children]
|
||||
masters
|
||||
nodes
|
||||
etcd
|
||||
|
||||
[OSEv3:vars]
|
||||
ansible_ssh_user=root
|
||||
docker_version="ce"
|
||||
openshift_enable_olm=false
|
||||
openshift_deployment_type=origin
|
||||
openshift_enable_service_catalog=false
|
||||
openshift_metrics_install_metrics=false
|
||||
openshift_logging_install_logging=false
|
||||
openshift_cluster_monitoring_operator_install=false
|
||||
openshift_master_default_subdomain='172-27-11-10.nip.io'
|
||||
openshift_disable_check='disk_availability,memory_availability,docker_storage,package_availability,docker_image_availability,package_version'
|
||||
openshift_docker_options='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=172.30.0.0/16 --exec-opt native.cgroupdriver=systemd'
|
||||
openshift_master_identity_providers=[{'name': 'HTPASSWD', 'challenge': 'true', 'login': 'true', 'kind':'HTPasswdPasswordIdentityProvider', 'mappingMethod': 'claim'}]
|
||||
openshift_enable_excluders=false
|
||||
openshift_docker_excluder_install=false
|
||||
|
||||
[masters]
|
||||
okd.example.com openshift_public_ip='172.27.11.10' openshift_public_hostname='okd.example.com'
|
||||
|
||||
[etcd]
|
||||
okd.example.com etcd_ip='172.27.11.10'
|
||||
|
||||
[nodes]
|
||||
okd.example.com openshift_node_group_name='node-config-all-in-one'
|
||||
27
files/hosts-allinone.backup
Normal file
@ -0,0 +1,27 @@
|
||||
[OSEv3:children]
|
||||
masters
|
||||
nodes
|
||||
etcd
|
||||
|
||||
[OSEv3:vars]
|
||||
ansible_ssh_user=root
|
||||
docker_version="ce"
|
||||
openshift_enable_olm=false
|
||||
openshift_deployment_type=origin
|
||||
openshift_enable_service_catalog=false
|
||||
openshift_metrics_install_metrics=false
|
||||
openshift_logging_install_logging=false
|
||||
openshift_cluster_monitoring_operator_install=false
|
||||
openshift_master_default_subdomain='172-27-11-10.nip.io'
|
||||
openshift_disable_check='disk_availability,memory_availability,docker_storage,package_availability'
|
||||
openshift_docker_options='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=172.30.0.0/16 --exec-opt native.cgroupdriver=systemd'
|
||||
openshift_master_identity_providers=[{'name': 'HTPASSWD', 'challenge': 'true', 'login': 'true', 'kind':'HTPasswdPasswordIdentityProvider', 'mappingMethod': 'claim'}]
|
||||
|
||||
[masters]
|
||||
okd.example.com openshift_public_ip='172.27.11.10' openshift_public_hostname='okd.example.com'
|
||||
|
||||
[etcd]
|
||||
okd.example.com etcd_ip='172.27.11.10'
|
||||
|
||||
[nodes]
|
||||
okd.example.com openshift_node_group_name='node-config-all-in-one'
|
||||
27
files/hosts-allinone.backup2
Normal file
@ -0,0 +1,27 @@
|
||||
[OSEv3:children]
|
||||
masters
|
||||
nodes
|
||||
etcd
|
||||
|
||||
[OSEv3:vars]
|
||||
ansible_ssh_user=root
|
||||
docker_version="ce"
|
||||
openshift_enable_olm=false
|
||||
openshift_deployment_type=origin
|
||||
openshift_enable_service_catalog=false
|
||||
openshift_metrics_install_metrics=false
|
||||
openshift_logging_install_logging=false
|
||||
openshift_cluster_monitoring_operator_install=false
|
||||
openshift_master_default_subdomain='172-27-11-10.nip.io'
|
||||
openshift_disable_check='disk_availability,memory_availability,docker_storage,package_availability,docker_image_availability,package_version'
|
||||
openshift_docker_options='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=172.30.0.0/16 --exec-opt native.cgroupdriver=systemd'
|
||||
openshift_master_identity_providers=[{'name': 'HTPASSWD', 'challenge': 'true', 'login': 'true', 'kind':'HTPasswdPasswordIdentityProvider', 'mappingMethod': 'claim'}]
|
||||
|
||||
[masters]
|
||||
okd.example.com openshift_public_ip='172.27.11.10' openshift_public_hostname='okd.example.com'
|
||||
|
||||
[etcd]
|
||||
okd.example.com etcd_ip='172.27.11.10'
|
||||
|
||||
[nodes]
|
||||
okd.example.com openshift_node_group_name='node-config-all-in-one'
|
||||
28
files/hosts.backup
Normal file
@ -0,0 +1,28 @@
|
||||
[OSEv3:children]
|
||||
masters
|
||||
nodes
|
||||
etcd
|
||||
|
||||
[OSEv3:vars]
|
||||
ansible_ssh_user=root
|
||||
openshift_enable_olm=false
|
||||
openshift_deployment_type=origin
|
||||
openshift_enable_service_catalog=false
|
||||
openshift_metrics_install_metrics=true
|
||||
openshift_logging_install_logging=false
|
||||
openshift_cluster_monitoring_operator_install=false
|
||||
openshift_master_default_subdomain='172-27-11-10.nip.io'
|
||||
openshift_disable_check='disk_availability,memory_availability,docker_storage'
|
||||
openshift_docker_options='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=172.30.0.0/16 --exec-opt native.cgroupdriver=systemd'
|
||||
openshift_master_identity_providers=[{'name': 'HTPASSWD', 'challenge': 'true', 'login': 'true', 'kind':'HTPasswdPasswordIdentityProvider', 'mappingMethod': 'claim'}]
|
||||
|
||||
[masters]
|
||||
okd.example.com openshift_public_ip='172.27.11.10' openshift_public_hostname='okd.example.com'
|
||||
|
||||
[etcd]
|
||||
okd.example.com etcd_ip='172.27.11.10'
|
||||
|
||||
[nodes]
|
||||
okd.example.com openshift_node_group_name='node-config-master-infra'
|
||||
node1.example.com openshift_node_group_name='node-config-compute'
|
||||
node2.example.com openshift_node_group_name='node-config-compute'
|
||||
27
files/key
Normal file
@ -0,0 +1,27 @@
|
||||
-----BEGIN RSA PRIVATE KEY-----
|
||||
MIIEowIBAAKCAQEArSF2RtIR3POS3qTuKWXqSGHM3cC+P2CnQjAat/M5hwcBWJ3J
|
||||
925fxC/rPQtkMfILPyiwMv99vN7mxLcrnfmTfeKYiqRrj8oUGIf4CUQTQCc6dPBO
|
||||
mi7S+l5YUxOb9Gq0Cf8IKlzYi0qRXgDP1zj8gR5RpowOCtRclJbSivdK72++GHRH
|
||||
6UlDwgNRSRPS6NU4DsLNj9Gh25zgtGMlZtV8Q4Mybk7/BUvXsdDNHWZpJ8fedeUT
|
||||
WZGllgo/I5IpPKIixxciuCWUCak55bAlaPGjm8ShBZC05nwAb//MC5eWmPyMzt+b
|
||||
oshsqA5tLGtKLtHC8A8J0Rky0QliDyq7XN9/6wIDAQABAoIBAGKGmHjVM7U6KGrs
|
||||
EV0d0qY+ggfwmFQY/RZ9qbblg+eD5RA5O6bD+Vv8qTKkOPDzfdMDpMJhA31onIt2
|
||||
ciwEzBrnyUedKlk59xW+yzj6tLndmTbTSugTnZ0986XTkv0VfD/0EwGItPMQDIoi
|
||||
jCU/GPOh/XV6XsNq9wTYkBjlgo+fZ4m8e5gnRpMLVr+uo8Oy1LDQLMyl5uMintsa
|
||||
8BaEvwYi/khudph3AezdWZoUOG9CmmjBStMzvHanwiiwEnaeFiqjGptBT2+KBeGG
|
||||
IC34PiNGjZP5+uH2/g+TKHjuFMePoyvqKQcznqtNIF+1jS+2NK0YCxXHlk5gwYik
|
||||
p/MpJwECgYEA43A5Jw7NL4fsovMuzBIGwaZbAo6K2AEggkaTsAxIW6/M8hk0YiDw
|
||||
4hpAFIn17sx3aZODKsT25m803XNHDiCaok0ulrTkfNis+mMee18DtmLxLa8K0QaV
|
||||
DRvxdMDETEI1sZumjdfc3IMlgRMsvoYbyOGr5vo0Vuw8al5VwblP1DMCgYEAwt9V
|
||||
x2zALb6+NJMjtm6Jw88OvfUtZRUtLAkI0dDMdS07cAcKldDStMTtel1Pn5Asekws
|
||||
LhF7/4wP95hyPI9XQZjQrdmMk4GkIcc8ifEpQiWTFnbehqlqoSKEWL8xpR/xuUuD
|
||||
JragLvUOLqx6T6iwBCMqoV8q2AJLfEszv9FZrWkCgYEAx7anuRBaRL6KoJwCH9hE
|
||||
bo9xo1EfwoVa0oq+7PwcHcbFpGFVikV6wFBkrKRofIS25tJNf6TtWXOVbE/puRIQ
|
||||
NyynGFdHvAlX+5ZGEfdg/yrqtT7btKifAZ/j6q3KsVwCYi9XlX5Txp6ytCDuTW7d
|
||||
vwvLM0vJ4foXIyArFa1v19kCgYB/DOz4IEcLjBimXmgiQN9A8nZCEt+Nz8irtRgy
|
||||
81bZ7quZ1n1oP8WgZeQOq1eGSJE3CwKi5nNZoQ+n9ZRFN49EDUXAkt28LgG8pBEs
|
||||
PjcQET9cnhNm6H3EoKR41+6eIb2PeVQAoYC+HLcqZvk3hlt71xGsNEfSnWxplP4g
|
||||
SXWWQQKBgAxNSsnJm9GjE1oqGpwXISSR6b50Hsnq6aolgC1B+PNmhCwdRDcJ5E5E
|
||||
vqYsB1dJ6jF2zAV6Hg9tvt9MpuyLXlvcWsdxJZF0frySmdEnrInZLRex6/Gw4UcI
|
||||
zHYY4tJfcoTRXYI7VflrhO6e/5vYFv+AvGqpr+5Dx4zWD8M7G/SN
|
||||
-----END RSA PRIVATE KEY-----
|
||||
1
files/key.pub
Normal file
@ -0,0 +1 @@
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtIXZG0hHc85LepO4pZepIYczdwL4/YKdCMBq38zmHBwFYncn3bl/EL+s9C2Qx8gs/KLAy/3283ubEtyud+ZN94piKpGuPyhQYh/gJRBNAJzp08E6aLtL6XlhTE5v0arQJ/wgqXNiLSpFeAM/XOPyBHlGmjA4K1FyUltKK90rvb74YdEfpSUPCA1FJE9Lo1TgOws2P0aHbnOC0YyVm1XxDgzJuTv8FS9ex0M0dZmknx9515RNZkaWWCj8jkik8oiLHFyK4JZQJqTnlsCVo8aObxKEFkLTmfABv/8wLl5aY/IzO35uiyGyoDm0sa0ou0cLwDwnRGTLRCWIPKrtc33/r
|
||||
14
files/ldap.ldif
Normal file
@ -0,0 +1,14 @@
|
||||
dn: olcDatabase={2}hdb,cn=config
|
||||
changetype: modify
|
||||
replace: olcSuffix
|
||||
olcSuffix: dc=extras,dc=example,dc=com
|
||||
|
||||
dn: olcDatabase={2}hdb,cn=config
|
||||
changetype: modify
|
||||
replace: olcRootDN
|
||||
olcRootDN: cn=admin,dc=extras,dc=example,dc=com
|
||||
|
||||
dn: olcDatabase={2}hdb,cn=config
|
||||
changetype: modify
|
||||
replace: olcRootPW
|
||||
olcRootPW: {SSHA}DHB1bLFwMkP7VtUM8MAu5NzZunlAeA07
|
||||
47
files/users.ldif
Normal file
@ -0,0 +1,47 @@
|
||||
dn: uid=ronnie.james,ou=users,dc=extras,dc=example,dc=com
|
||||
objectClass: top
|
||||
objectClass: account
|
||||
objectClass: posixAccount
|
||||
objectClass: shadowAccount
|
||||
cn: Ronnie James Dio
|
||||
uid: ronnie.james
|
||||
uidNumber: 10000
|
||||
gidNumber: 10000
|
||||
homeDirectory: /srv/home/ronnie.james
|
||||
loginShell: /bin/bash
|
||||
userPassword: {SSHA}MhndfhVccrnp3Ynam7WhQOp3Eoy/f1YT
|
||||
shadowLastChange: 0
|
||||
shadowMax: 0
|
||||
shadowWarning: 0
|
||||
|
||||
dn: uid=lou.gramm,ou=users,dc=extras,dc=example,dc=com
|
||||
objectClass: top
|
||||
objectClass: account
|
||||
objectClass: posixAccount
|
||||
objectClass: shadowAccount
|
||||
cn: Lou Gramm
|
||||
uid: lou.gramm
|
||||
uidNumber: 10001
|
||||
gidNumber: 10001
|
||||
homeDirectory: /srv/home/lou.gramm
|
||||
loginShell: /bin/bash
|
||||
userPassword: {SSHA}T9+m42tBydKkjMPH+X9NrQxY9pzxXcQC
|
||||
shadowLastChange: 0
|
||||
shadowMax: 0
|
||||
shadowWarning: 0
|
||||
|
||||
dn: uid=tina.turner,ou=users,dc=extras,dc=example,dc=com
|
||||
objectClass: top
|
||||
objectClass: account
|
||||
objectClass: posixAccount
|
||||
objectClass: shadowAccount
|
||||
cn: Tina Turner
|
||||
uid: tina.turner
|
||||
uidNumber: 10002
|
||||
gidNumber: 10001
|
||||
homeDirectory: /srv/home/tina.turner
|
||||
loginShell: /bin/bash
|
||||
userPassword: {SSHA}NM0Y0NPj5uus1qbGVFPWuxOx1iDwgYZX
|
||||
shadowLastChange: 0
|
||||
shadowMax: 0
|
||||
shadowWarning: 0
|
||||
236
haproxy/haproxy.config
Normal file
@ -0,0 +1,236 @@
|
||||
global
|
||||
maxconn 20000
|
||||
|
||||
|
||||
|
||||
daemon
|
||||
ca-base /etc/ssl
|
||||
crt-base /etc/ssl
|
||||
# TODO: Check if we can get reload to be faster by saving server state.
|
||||
# server-state-file /var/lib/haproxy/run/haproxy.state
|
||||
stats socket /var/lib/haproxy/run/haproxy.sock mode 600 level admin expose-fd listeners
|
||||
stats timeout 2m
|
||||
|
||||
# Increase the default request size to be comparable to modern cloud load balancers (ALB: 64kb), affects
|
||||
# total memory use when large numbers of connections are open.
|
||||
tune.maxrewrite 8192
|
||||
tune.bufsize 32768
|
||||
|
||||
# Configure the TLS versions we support
|
||||
ssl-default-bind-options ssl-min-ver TLSv1.0
|
||||
|
||||
# The default cipher suite can be selected from the three sets recommended by https://wiki.mozilla.org/Security/Server_Side_TLS,
|
||||
# or the user can provide one using the ROUTER_CIPHERS environment variable.
|
||||
# By default when a cipher set is not provided, intermediate is used.
|
||||
# Intermediate cipher suite (default) from https://wiki.mozilla.org/Security/Server_Side_TLS
|
||||
tune.ssl.default-dh-param 2048
|
||||
ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
|
||||
|
||||
|
||||
defaults
|
||||
maxconn 20000
|
||||
|
||||
# Add x-forwarded-for header.
|
||||
|
||||
# To configure custom default errors, you can either uncomment the
|
||||
# line below (server ... 127.0.0.1:8080) and point it to your custom
|
||||
# backend service or alternatively, you can send a custom 503 error.
|
||||
#
|
||||
# server openshift_backend 127.0.0.1:8080
|
||||
errorfile 503 /var/lib/haproxy/conf/error-page-503.http
|
||||
|
||||
timeout connect 5s
|
||||
timeout client 30s
|
||||
timeout client-fin 1s
|
||||
timeout server 30s
|
||||
timeout server-fin 1s
|
||||
timeout http-request 10s
|
||||
timeout http-keep-alive 300s
|
||||
|
||||
# Long timeout for WebSocket connections.
|
||||
timeout tunnel 1h
|
||||
|
||||
|
||||
|
||||
frontend public
|
||||
|
||||
bind :80
|
||||
mode http
|
||||
tcp-request inspect-delay 5s
|
||||
tcp-request content accept if HTTP
|
||||
monitor-uri /_______internal_router_healthz
|
||||
|
||||
# Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
|
||||
http-request del-header Proxy
|
||||
|
||||
# DNS labels are case insensitive (RFC 4343), we need to convert the hostname into lowercase
|
||||
# before matching, or any requests containing uppercase characters will never match.
|
||||
  http-request set-header Host %[req.hdr(Host),lower]

  # check if we need to redirect/force using https.
  acl secure_redirect base,map_reg(/var/lib/haproxy/conf/os_route_http_redirect.map) -m found
  redirect scheme https if secure_redirect

  use_backend %[base,map_reg(/var/lib/haproxy/conf/os_http_be.map)]

  default_backend openshift_default

# public ssl accepts all connections and isn't checking certificates yet; certificates to use will be
# determined by the next backend in the chain, which may be an app backend (passthrough termination) or a backend
# that terminates encryption in this router (edge)
frontend public_ssl

  bind :443
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  # if the connection is SNI and the route is a passthrough, don't use the termination backend; just use the tcp backend
  # for the SNI case, we also need to compare it in case-insensitive mode (by converting it to lowercase) as RFC 4343 says
  acl sni req.ssl_sni -m found
  acl sni_passthrough req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_sni_passthrough.map) -m found
  use_backend %[req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_tcp_be.map)] if sni sni_passthrough

  # if the route is SNI and NOT passthrough, enter the termination flow
  use_backend be_sni if sni

  # non-SNI requests should enter a default termination backend rather than the custom cert SNI backend, since it
  # will not be able to match a cert to an SNI host
  default_backend be_no_sni

##########################################################################
# TLS SNI
#
# When using SNI we can terminate encryption with custom certificates.
# Certs will be stored in a directory and will be matched with the SNI host header,
# which must exist in the CN of the certificate. Certificates must be concatenated
# as a single file (handled by the plugin writer) per the haproxy documentation.
#
# Finally, check re-encryption settings and re-encrypt or just pass along the unencrypted
# traffic
##########################################################################
backend be_sni
  server fe_sni 127.0.0.1:10444 weight 1 send-proxy

frontend fe_sni
  # terminate ssl on edge
  bind 127.0.0.1:10444 ssl crt /etc/pki/tls/private/tls.crt crt-list /var/lib/haproxy/conf/cert_config.map accept-proxy
  mode http

  # Strip off Proxy headers to prevent HTTPoxy (https://httpoxy.org/)
  http-request del-header Proxy

  # DNS labels are case insensitive (RFC 4343); we need to convert the hostname into lowercase
  # before matching, or any requests containing uppercase characters will never match.
  http-request set-header Host %[req.hdr(Host),lower]

  # map to backend
  # Search from most specific to general path (host case).
  # Note: If no match, haproxy uses the default_backend; no other
  # use_backend directives below this will be processed.
  use_backend %[base,map_reg(/var/lib/haproxy/conf/os_edge_reencrypt_be.map)]

  default_backend openshift_default

##########################################################################
# END TLS SNI
##########################################################################

##########################################################################
# TLS NO SNI
#
# When we don't have SNI, the only thing we can try to do is terminate the encryption
# using our wildcard certificate. Once that is complete, we can either re-encrypt
# the traffic or pass it on to the backends
##########################################################################
# backend for when sni does not exist, or ssl term needs to happen on the edge
backend be_no_sni
  server fe_no_sni 127.0.0.1:10443 weight 1 send-proxy

frontend fe_no_sni
  # terminate ssl on edge
  bind 127.0.0.1:10443 ssl crt /etc/pki/tls/private/tls.crt accept-proxy
  mode http

  # Strip off Proxy headers to prevent HTTPoxy (https://httpoxy.org/)
  http-request del-header Proxy

  # DNS labels are case insensitive (RFC 4343); we need to convert the hostname into lowercase
  # before matching, or any requests containing uppercase characters will never match.
  http-request set-header Host %[req.hdr(Host),lower]

  # map to backend
  # Search from most specific to general path (host case).
  # Note: If no match, haproxy uses the default_backend; no other
  # use_backend directives below this will be processed.
  use_backend %[base,map_reg(/var/lib/haproxy/conf/os_edge_reencrypt_be.map)]

  default_backend openshift_default

##########################################################################
# END TLS NO SNI
##########################################################################

backend openshift_default
  mode http
  option forwardfor
  #option http-keep-alive
  option http-pretend-keepalive

##-------------- app level backends ----------------

# Secure backend, pass through
backend be_tcp:default:docker-registry
  balance source

  hash-type consistent
  timeout check 5000ms
  server pod:docker-registry-1-vwjlv:docker-registry:10.128.0.21:5000 10.128.0.21:5000 weight 256

# Secure backend, pass through
backend be_tcp:default:registry-console
  balance source

  hash-type consistent
  timeout check 5000ms
  server pod:registry-console-1-wlc9l:registry-console:10.128.0.27:9090 10.128.0.27:9090 weight 256

# Plain http backend, or backend with TLS terminated at the edge, or a
# secure backend with re-encryption.
backend be_secure:openshift-console:console
  mode http
  option redispatch
  option forwardfor
  balance leastconn

  timeout check 5000ms
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
  http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)];proto-version=%[req.hdr(X-Forwarded-Proto-Version)]
  cookie 1e2670d92730b515ce3a1bb65da45062 insert indirect nocache httponly secure attr SameSite=None
  server pod:console-75ff54865-bxf7m:console:10.128.0.22:8443 10.128.0.22:8443 cookie 7975a71592eb59717a53657aad37ba28 weight 256 ssl verifyhost console.openshift-console.svc verify required ca-file /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

# Plain http backend, or backend with TLS terminated at the edge, or a
# secure backend with re-encryption.
backend be_secure:openshift-infra:hawkular-metrics
  mode http
  option redispatch
  option forwardfor
  balance leastconn

  timeout check 5000ms
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
  http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)];proto-version=%[req.hdr(X-Forwarded-Proto-Version)]
  cookie a054b5d9e987bf679f10c9d29be39478 insert indirect nocache httponly secure attr SameSite=None
  server pod:hawkular-metrics-rp6cn:hawkular-metrics:10.128.0.26:8443 10.128.0.26:8443 cookie bb9702a210555545797c318f9112d112 weight 256 ssl verify required ca-file /var/lib/haproxy/router/cacerts/openshift-infra:hawkular-metrics.pem
4  haproxy/os_tcp_be.map  Normal file
@@ -0,0 +1,4 @@
^registry-console-default\.172-27-11-10\.nip\.io(:[0-9]+)?(/.*)?$ be_tcp:default:registry-console
^hawkular-metrics\.172-27-11-10\.nip\.io(:[0-9]+)?(/.*)?$ be_secure:openshift-infra:hawkular-metrics
^docker-registry-default\.172-27-11-10\.nip\.io(:[0-9]+)?(/.*)?$ be_tcp:default:docker-registry
^console\.172-27-11-10\.nip\.io(:[0-9]+)?(/.*)?$ be_secure:openshift-console:console
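The first field of each map entry is an extended regular expression that HAProxy's `map_reg` converter matches against the lowercased request host. Since `grep -E` understands the same regex syntax, an entry can be dry-run outside HAProxy. A minimal sketch, using a hypothetical client host value:

```shell
# Dry-run one os_tcp_be.map entry: the pattern is copied from the map above,
# the host is a hypothetical client request (with an explicit port, which the
# optional (:[0-9]+)? group should absorb).
pattern='^registry-console-default\.172-27-11-10\.nip\.io(:[0-9]+)?(/.*)?$'
host='registry-console-default.172-27-11-10.nip.io:443'
echo "$host" | grep -qE "$pattern" && result=match || result=nomatch
echo "$result"
```

If the pattern matches, HAProxy would route the connection to the `be_tcp:default:registry-console` backend named in the entry's second field.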
0  haproxy/os_wildcard_domain.map  Normal file
42  provision/allinone.sh  Normal file
@@ -0,0 +1,42 @@
#!/bin/bash

/vagrant/provision/fix-centos-repos.sh
/vagrant/provision/fix-openshift-repos.sh

# Dependencies
yum install -y curl vim device-mapper-persistent-data lvm2 epel-release wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct

# Install and start Docker
yum install -y docker
systemctl start docker
systemctl enable docker
systemctl status docker

#docker pull docker.io/openshift/origin-pod:v3.11
#docker pull docker.io/openshift/origin-node:v3.11
#docker pull docker.io/openshift/origin-docker-builder:v3.11.0
#docker pull docker.io/openshift/origin-deployer:v3.11
#docker pull docker.io/openshift/origin-haproxy-router:v3.11
#docker pull docker.io/cockpit/kubernetes
#docker pull docker.io/openshift/origin-docker-registry:v3.11
#docker pull docker.io/openshift/origin-control-plane:v3.11
#docker pull quay.io/coreos/etcd:v3.2.22

yum install -y container-selinux libsemanage-python httpd-tools java python-passlib pyOpenSSL PyYAML python-jinja2 python-paramiko python-setuptools python2-cryptography sshpass
rpm -i https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.5.7-1.el7.ans.noarch.rpm
cp /vagrant/files/hosts-allinone /etc/ansible/hosts
cp /vagrant/files/ansible.cfg /etc/ansible/ansible.cfg
cp /vagrant/files/key /root/.ssh/id_rsa; chmod 400 /root/.ssh/id_rsa
cp /vagrant/files/key.pub /root/.ssh/id_rsa.pub
sed -i -e "s/#host_key_checking/host_key_checking/" /etc/ansible/ansible.cfg
sed -i -e "s@#private_key_file = /path/to/file@private_key_file = /root/.ssh/id_rsa@" /etc/ansible/ansible.cfg

git clone -b release-3.11 --single-branch https://github.com/openshift/openshift-ansible /root/openshift-ansible
cd /root/openshift-ansible
sed -i 's/openshift.common.ip/openshift.common.public_ip/' roles/openshift_control_plane/templates/master.yaml.v1.j2

ansible-playbook /root/openshift-ansible/playbooks/prerequisites.yml
/vagrant/provision/fix-openshift-repos.sh
ansible-playbook /root/openshift-ansible/playbooks/deploy_cluster.yml

mkdir -p /etc/origin/master && htpasswd -Bbc /etc/origin/master/htpasswd developer 4linux
32  provision/allinone.sh.backup  Normal file
@@ -0,0 +1,32 @@
#!/bin/bash

# Dependencies
yum install -y curl vim device-mapper-persistent-data lvm2 epel-release wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct

#docker pull docker.io/openshift/origin-pod:v3.11
#docker pull docker.io/openshift/origin-node:v3.11
#docker pull docker.io/openshift/origin-docker-builder:v3.11.0
#docker pull docker.io/openshift/origin-deployer:v3.11
#docker pull docker.io/openshift/origin-haproxy-router:v3.11
#docker pull docker.io/cockpit/kubernetes
#docker pull docker.io/openshift/origin-docker-registry:v3.11
#docker pull docker.io/openshift/origin-control-plane:v3.11
#docker pull quay.io/coreos/etcd:v3.2.22

yum install -y java python-passlib pyOpenSSL PyYAML python-jinja2 python-paramiko python-setuptools python2-cryptography sshpass
rpm -i https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.5.7-1.el7.ans.noarch.rpm
cp /vagrant/files/hosts-allinone /etc/ansible/hosts
cp /vagrant/files/ansible.cfg /etc/ansible/ansible.cfg
cp /vagrant/files/key /root/.ssh/id_rsa; chmod 400 /root/.ssh/id_rsa
cp /vagrant/files/key.pub /root/.ssh/id_rsa.pub
sed -i -e "s/#host_key_checking/host_key_checking/" /etc/ansible/ansible.cfg
sed -i -e "s@#private_key_file = /path/to/file@private_key_file = /root/.ssh/id_rsa@" /etc/ansible/ansible.cfg

git clone -b release-3.11 --single-branch https://github.com/openshift/openshift-ansible /root/openshift-ansible
cd /root/openshift-ansible
sed -i 's/openshift.common.ip/openshift.common.public_ip/' roles/openshift_control_plane/templates/master.yaml.v1.j2

ansible-playbook /root/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook /root/openshift-ansible/playbooks/deploy_cluster.yml

htpasswd -Bbc /etc/origin/master/htpasswd developer 4linux
35  provision/allinone.sh.backup3  Normal file
@@ -0,0 +1,35 @@
#!/bin/bash

/vagrant/provision/fix-centos-repos.sh
/vagrant/provision/fix-openshift-repos.sh

# Dependencies
yum install -y curl vim device-mapper-persistent-data lvm2 epel-release wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct

#docker pull docker.io/openshift/origin-pod:v3.11
#docker pull docker.io/openshift/origin-node:v3.11
#docker pull docker.io/openshift/origin-docker-builder:v3.11.0
#docker pull docker.io/openshift/origin-deployer:v3.11
#docker pull docker.io/openshift/origin-haproxy-router:v3.11
#docker pull docker.io/cockpit/kubernetes
#docker pull docker.io/openshift/origin-docker-registry:v3.11
#docker pull docker.io/openshift/origin-control-plane:v3.11
#docker pull quay.io/coreos/etcd:v3.2.22

yum install -y httpd-tools java python-passlib pyOpenSSL PyYAML python-jinja2 python-paramiko python-setuptools python2-cryptography sshpass
rpm -i https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.5.7-1.el7.ans.noarch.rpm
cp /vagrant/files/hosts-allinone /etc/ansible/hosts
cp /vagrant/files/ansible.cfg /etc/ansible/ansible.cfg
cp /vagrant/files/key /root/.ssh/id_rsa; chmod 400 /root/.ssh/id_rsa
cp /vagrant/files/key.pub /root/.ssh/id_rsa.pub
sed -i -e "s/#host_key_checking/host_key_checking/" /etc/ansible/ansible.cfg
sed -i -e "s@#private_key_file = /path/to/file@private_key_file = /root/.ssh/id_rsa@" /etc/ansible/ansible.cfg

git clone -b release-3.11 --single-branch https://github.com/openshift/openshift-ansible /root/openshift-ansible
cd /root/openshift-ansible
sed -i 's/openshift.common.ip/openshift.common.public_ip/' roles/openshift_control_plane/templates/master.yaml.v1.j2

ansible-playbook /root/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook /root/openshift-ansible/playbooks/deploy_cluster.yml

mkdir -p /etc/origin/master && htpasswd -Bbc /etc/origin/master/htpasswd developer 4linux
34  provision/allinone.sh.fixed  Normal file
@@ -0,0 +1,34 @@
#!/bin/bash

/vagrant/provision/fix-centos-repos.sh

# Dependencies
yum install -y curl vim device-mapper-persistent-data lvm2 epel-release wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct

#docker pull docker.io/openshift/origin-pod:v3.11
#docker pull docker.io/openshift/origin-node:v3.11
#docker pull docker.io/openshift/origin-docker-builder:v3.11.0
#docker pull docker.io/openshift/origin-deployer:v3.11
#docker pull docker.io/openshift/origin-haproxy-router:v3.11
#docker pull docker.io/cockpit/kubernetes
#docker pull docker.io/openshift/origin-docker-registry:v3.11
#docker pull docker.io/openshift/origin-control-plane:v3.11
#docker pull quay.io/coreos/etcd:v3.2.22

yum install -y java python-passlib pyOpenSSL PyYAML python-jinja2 python-paramiko python-setuptools python2-cryptography sshpass
rpm -i https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.5.7-1.el7.ans.noarch.rpm
cp /vagrant/files/hosts-allinone /etc/ansible/hosts
cp /vagrant/files/ansible.cfg /etc/ansible/ansible.cfg
cp /vagrant/files/key /root/.ssh/id_rsa; chmod 400 /root/.ssh/id_rsa
cp /vagrant/files/key.pub /root/.ssh/id_rsa.pub
sed -i -e "s/#host_key_checking/host_key_checking/" /etc/ansible/ansible.cfg
sed -i -e "s@#private_key_file = /path/to/file@private_key_file = /root/.ssh/id_rsa@" /etc/ansible/ansible.cfg

git clone -b release-3.11 --single-branch https://github.com/openshift/openshift-ansible /root/openshift-ansible
cd /root/openshift-ansible
sed -i 's/openshift.common.ip/openshift.common.public_ip/' roles/openshift_control_plane/templates/master.yaml.v1.j2

ansible-playbook /root/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook /root/openshift-ansible/playbooks/deploy_cluster.yml

htpasswd -Bbc /etc/origin/master/htpasswd developer 4linux
33  provision/extras.sh  Normal file
@@ -0,0 +1,33 @@
#!/bin/bash

/vagrant/provision/fix-centos-repos.sh

yum -y install vim openldap-servers openldap-clients

# LDAP
systemctl enable slapd
systemctl start slapd

ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif

ldapmodify -Y EXTERNAL -H ldapi:/// -f /vagrant/files/ldap.ldif
ldapadd -h 'localhost' -D 'cn=admin,dc=extras,dc=example,dc=com' -w 'okdldap' -f /vagrant/files/base.ldif
ldapadd -h 'localhost' -D 'cn=admin,dc=extras,dc=example,dc=com' -w 'okdldap' -f /vagrant/files/users.ldif
ldapadd -h 'localhost' -D 'cn=admin,dc=extras,dc=example,dc=com' -w 'okdldap' -f /vagrant/files/groups.ldif

# NFS
> /etc/exports

for X in $(seq 0 9); do
    mkdir -p /srv/nfs/v$X
    echo "/srv/nfs/v$X 172.27.11.0/24(rw,all_squash)" >> /etc/exports
done

chmod 0700 /srv/nfs/v*
chown nfsnobody: /srv/nfs/v*

exportfs -a
systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server
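The NFS section of extras.sh truncates /etc/exports and then generates ten export entries in a loop, one per persistent-volume directory. A minimal sketch of that generation step, writing to a temp file instead of /etc/exports so it can be run unprivileged:

```shell
# Reproduce the export-generation loop against a temp file. The subnet is
# the cluster's host-only network from the Vagrantfile.
exports=$(mktemp)
for X in $(seq 0 9); do
    echo "/srv/nfs/v$X 172.27.11.0/24(rw,all_squash)" >> "$exports"
done
count=$(wc -l < "$exports")
echo "$count"
rm -f "$exports"
```

Each of the resulting directories (/srv/nfs/v0 through /srv/nfs/v9) backs one NFS persistent volume in the cluster.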
31  provision/extras.sh.backup  Normal file
@@ -0,0 +1,31 @@
#!/bin/bash

yum -y install vim openldap-servers openldap-clients

# LDAP
systemctl enable slapd
systemctl start slapd

ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif

ldapmodify -Y EXTERNAL -H ldapi:/// -f /vagrant/files/ldap.ldif
ldapadd -h 'localhost' -D 'cn=admin,dc=extras,dc=example,dc=com' -w 'okdldap' -f /vagrant/files/base.ldif
ldapadd -h 'localhost' -D 'cn=admin,dc=extras,dc=example,dc=com' -w 'okdldap' -f /vagrant/files/users.ldif
ldapadd -h 'localhost' -D 'cn=admin,dc=extras,dc=example,dc=com' -w 'okdldap' -f /vagrant/files/groups.ldif

# NFS
> /etc/exports

for X in $(seq 0 9); do
    mkdir -p /srv/nfs/v$X
    echo "/srv/nfs/v$X 172.27.11.0/24(rw,all_squash)" >> /etc/exports
done

chmod 0700 /srv/nfs/v*
chown nfsnobody: /srv/nfs/v*

exportfs -a
systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server
42  provision/fix-centos-repos.sh  Executable file
@@ -0,0 +1,42 @@
#!/bin/bash
# Fix CentOS 7 repository URLs to use vault.centos.org

echo "Fixing CentOS 7 repository URLs..."

# Backup original repo files
cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup 2>/dev/null || true

# Create new CentOS-Base.repo pointing to vault.centos.org
cat > /etc/yum.repos.d/CentOS-Base.repo << 'REPO_EOF'
[base]
name=CentOS-7 - Base
baseurl=http://vault.centos.org/7.9.2009/os/x86_64/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[updates]
name=CentOS-7 - Updates
baseurl=http://vault.centos.org/7.9.2009/updates/x86_64/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[extras]
name=CentOS-7 - Extras
baseurl=http://vault.centos.org/7.9.2009/extras/x86_64/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[centosplus]
name=CentOS-7 - Plus
baseurl=http://vault.centos.org/7.9.2009/centosplus/x86_64/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
REPO_EOF

# Clean yum cache
yum clean all
yum makecache
45  provision/fix-openshift-repos.sh  Executable file
@@ -0,0 +1,45 @@
#!/bin/bash
# Comprehensive fix for OpenShift Origin repository issues

echo "Fixing OpenShift Origin repository URLs..."

# Fix the main CentOS-OpenShift-Origin311.repo file to use vault.centos.org
cat > /etc/yum.repos.d/CentOS-OpenShift-Origin311.repo << 'REPO_EOF'
[centos-openshift-origin311]
name=CentOS OpenShift Origin
baseurl=http://vault.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin311-testing]
name=CentOS OpenShift Origin Testing
baseurl=http://vault.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin311-debuginfo]
name=CentOS OpenShift Origin DebugInfo
baseurl=http://vault.centos.org/centos/7/paas/x86_64/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin311-source]
name=CentOS OpenShift Origin Source
baseurl=http://vault.centos.org/centos/7/paas/Source/openshift-origin311/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
REPO_EOF

# Also handle any other variants
if [ -f /etc/yum.repos.d/centos-openshift-origin311.repo ]; then
    rm -f /etc/yum.repos.d/centos-openshift-origin311.repo
fi

# Clean yum cache to remove any cached broken repo data
yum clean all

echo "OpenShift Origin repository URLs fixed to use vault.centos.org"
57  provision/fix-repos-persistent.sh  Executable file
@@ -0,0 +1,57 @@
#!/bin/bash
# Persistent repository fix that runs continuously

echo "Setting up persistent repository fix..."

# Create a script that will fix repos whenever they're created
cat > /usr/local/bin/fix-openshift-repos-monitor.sh << 'MONITOR_EOF'
#!/bin/bash
while true; do
  if [ -f /etc/yum.repos.d/CentOS-OpenShift-Origin311.repo ]; then
    if grep -q "mirror.centos.org" /etc/yum.repos.d/CentOS-OpenShift-Origin311.repo; then
      echo "$(date): Fixing OpenShift Origin repository URLs..."

cat > /etc/yum.repos.d/CentOS-OpenShift-Origin311.repo << 'REPO_EOF'
[centos-openshift-origin311]
name=CentOS OpenShift Origin
baseurl=http://vault.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin311-testing]
name=CentOS OpenShift Origin Testing
baseurl=http://vault.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin311-debuginfo]
name=CentOS OpenShift Origin DebugInfo
baseurl=http://vault.centos.org/centos/7/paas/x86_64/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin311-source]
name=CentOS OpenShift Origin Source
baseurl=http://vault.centos.org/centos/7/paas/Source/openshift-origin311/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
REPO_EOF

      yum clean all
      echo "$(date): Repository URLs fixed to use vault.centos.org"
    fi
  fi
  sleep 5
done
MONITOR_EOF

chmod +x /usr/local/bin/fix-openshift-repos-monitor.sh

# Start the monitor in background
nohup /usr/local/bin/fix-openshift-repos-monitor.sh > /var/log/repo-fix.log 2>&1 &

echo "Persistent repository fix started"
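The monitor's trigger condition is a plain `grep` for the retired mirror hostname: whenever openshift-ansible re-creates the repo file pointing at mirror.centos.org, the loop rewrites it to vault.centos.org. A sketch of that detection step, run against a temp file standing in for CentOS-OpenShift-Origin311.repo:

```shell
# Simulate a freshly (re)created repo file that still points at the retired
# mirror, then apply the same grep the monitor loop uses.
repo=$(mktemp)
cat > "$repo" << 'EOF'
[centos-openshift-origin311]
baseurl=http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
EOF
if grep -q "mirror.centos.org" "$repo"; then
    status="needs-fix"
else
    status="ok"
fi
echo "$status"
rm -f "$repo"
```

Once the rewritten file contains only vault.centos.org URLs, the grep no longer matches and the loop idles on its five-second sleep.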
36  provision/master.sh  Normal file
@@ -0,0 +1,36 @@
#!/bin/bash

/vagrant/provision/fix-centos-repos.sh
/vagrant/provision/fix-repos-persistent.sh

# Dependencies
yum install -y curl vim device-mapper-persistent-data lvm2 epel-release wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct docker python-ipaddress PyYAML

systemctl start docker
systemctl enable docker

for IMAGE in 'origin-node:v3.11' 'origin-pod:v3.11'; do
    docker pull "quay.io/openshift/$IMAGE"
    for IP in 20 30; do
        docker save "quay.io/openshift/$IMAGE" | ssh -o stricthostkeychecking=no root@172.27.11.$IP docker load
    done
done

yum install -y container-selinux libsemanage-python httpd-tools java python-passlib pyOpenSSL python-jinja2 python-paramiko python-setuptools python2-cryptography sshpass
rpm -i https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.6.2-1.el7.ans.noarch.rpm
cp /vagrant/files/hosts /etc/ansible/hosts
cp /vagrant/files/ansible.cfg /etc/ansible/ansible.cfg
cp /vagrant/files/key /root/.ssh/id_rsa; chmod 400 /root/.ssh/id_rsa
cp /vagrant/files/key.pub /root/.ssh/id_rsa.pub
sed -i -e "s/#host_key_checking/host_key_checking/" /etc/ansible/ansible.cfg
sed -i -e "s@#private_key_file = /path/to/file@private_key_file = /root/.ssh/id_rsa@" /etc/ansible/ansible.cfg

git clone -b release-3.11 --single-branch https://github.com/openshift/openshift-ansible /root/openshift-ansible
cd /root/openshift-ansible
sed -i 's/openshift.common.ip/openshift.common.public_ip/' roles/openshift_control_plane/templates/master.yaml.v1.j2

ansible-playbook /root/openshift-ansible/playbooks/prerequisites.yml
/vagrant/provision/fix-repos-persistent.sh
ansible-playbook /root/openshift-ansible/playbooks/deploy_cluster.yml

mkdir -p /etc/origin/master && htpasswd -Bbc /etc/origin/master/htpasswd developer 4linux
34  provision/master.sh.backup  Normal file
@@ -0,0 +1,34 @@
#!/bin/bash

/vagrant/provision/fix-centos-repos.sh

# Dependencies
yum install -y curl vim device-mapper-persistent-data lvm2 epel-release wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct docker python-ipaddress PyYAML

systemctl start docker
systemctl enable docker

for IMAGE in 'origin-node:v3.11' 'origin-pod:v3.11'; do
    docker pull "quay.io/openshift/$IMAGE"
    for IP in 20 30; do
        docker save "quay.io/openshift/$IMAGE" | ssh -o stricthostkeychecking=no root@172.27.11.$IP docker load
    done
done

yum install -y java python-passlib pyOpenSSL python-jinja2 python-paramiko python-setuptools python2-cryptography sshpass
rpm -i https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.6.2-1.el7.ans.noarch.rpm
cp /vagrant/files/hosts /etc/ansible/hosts
cp /vagrant/files/ansible.cfg /etc/ansible/ansible.cfg
cp /vagrant/files/key /root/.ssh/id_rsa; chmod 400 /root/.ssh/id_rsa
cp /vagrant/files/key.pub /root/.ssh/id_rsa.pub
sed -i -e "s/#host_key_checking/host_key_checking/" /etc/ansible/ansible.cfg
sed -i -e "s@#private_key_file = /path/to/file@private_key_file = /root/.ssh/id_rsa@" /etc/ansible/ansible.cfg

git clone -b release-3.11 --single-branch https://github.com/openshift/openshift-ansible /root/openshift-ansible
cd /root/openshift-ansible
sed -i 's/openshift.common.ip/openshift.common.public_ip/' roles/openshift_control_plane/templates/master.yaml.v1.j2

ansible-playbook /root/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook /root/openshift-ansible/playbooks/deploy_cluster.yml

htpasswd -Bbc /etc/origin/master/htpasswd developer 4linux
11  provision/node.sh  Normal file
@@ -0,0 +1,11 @@
#!/bin/bash

# Fix CentOS repositories first
/vagrant/provision/fix-centos-repos.sh
/vagrant/provision/fix-repos-persistent.sh

# Dependencies
yum install -y container-selinux libsemanage-python curl vim device-mapper-persistent-data lvm2 epel-release wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct docker python-ipaddress PyYAML

systemctl start docker
systemctl enable docker
7  provision/node.sh.backup  Normal file
@@ -0,0 +1,7 @@
#!/bin/bash

# Dependencies
yum install -y curl vim device-mapper-persistent-data lvm2 epel-release wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct docker python-ipaddress PyYAML

systemctl start docker
systemctl enable docker
10  provision/node.sh.backup-new  Normal file
@@ -0,0 +1,10 @@
#!/bin/bash

# Fix CentOS repositories first
/vagrant/provision/fix-centos-repos.sh

# Dependencies
yum install -y curl vim device-mapper-persistent-data lvm2 epel-release wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct docker python-ipaddress PyYAML

systemctl start docker
systemctl enable docker
14  provision/provision.sh  Normal file
@@ -0,0 +1,14 @@
#!/bin/bash

mkdir -p /root/.ssh
cp /vagrant/files/key.pub /root/.ssh/authorized_keys

HOSTS="$(head -n2 /etc/hosts)"
echo -e "$HOSTS" > /etc/hosts
cat >> /etc/hosts <<EOF
172.27.11.10 okd.example.com
172.27.11.20 node1.example.com
172.27.11.30 node2.example.com
172.27.11.40 extras.example.com
EOF
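The hosts-file rewrite in provision.sh keeps only the first two lines of /etc/hosts (typically the loopback entries the base box ships with) and then appends the cluster hostnames. A minimal sketch of the same head-then-append pattern, run against a temp file so it is safe to try outside the VM; the third "stale" line is a hypothetical leftover entry:

```shell
# Build a stand-in hosts file with two loopback lines plus one stale entry,
# keep the first two lines, then append a cluster host (as provision.sh does).
hosts=$(mktemp)
printf '127.0.0.1 localhost\n::1 localhost\n127.0.1.1 stale.example.com\n' > "$hosts"
HOSTS="$(head -n2 "$hosts")"
echo "$HOSTS" > "$hosts"
echo '172.27.11.10 okd.example.com' >> "$hosts"
total=$(wc -l < "$hosts")
echo "$total"
rm -f "$hosts"
```

The stale line is dropped and the file ends up with the two loopback lines plus the appended host, so every provisioned VM resolves the cluster names consistently.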