Multiple NIC Network Isolation - Overcloud
Network table:

Internal API       : 172.16.0.0/24
Tenant             : 172.17.0.0/24
Storage            : 172.18.0.0/24
Storage Management : 172.19.0.0/24
External           : 192.168.122.0/24
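These CIDRs map one-to-one onto the *NetCidr parameters in network-environment.yaml below, and each network lands on the overcloud nodes as a tagged VLAN on the NIC bond. As a quick sanity check after deployment (a sketch only: the heat-admin user and the vlan<ID> device naming follow the default TripleO templates, and the controller's ctlplane IP shown here is an example):

$ source ~/stackrc
$ openstack server list     # find the controller's ctlplane address
$ ssh heat-admin@192.168.24.15 'ip -4 addr show | grep -E "vlan[0-9]+|inet "'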
node-info.yaml
parameter_defaults:
  OvercloudControlFlavor: control
  OvercloudComputeFlavor: compute
  OvercloudCephStorageFlavor: ceph-storage
  ControllerCount: 1
  ComputeCount: 1
  CephStorageCount: 3
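node-info.yaml only pins the node counts and flavors; the flavors must already exist on the undercloud and the registered Ironic nodes need a matching profile capability so that one controller, one compute and three Ceph nodes are scheduled as intended. A minimal sketch of that tagging (the node UUID is a placeholder; repeat per node with the appropriate profile):

$ openstack baremetal node set <node-uuid> \
    --property capabilities='profile:ceph-storage,boot_option:local'
$ openstack flavor set \
    --property 'capabilities:profile'='ceph-storage' \
    --property 'capabilities:boot_option'='local' ceph-storage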
network-environment.yaml
# This template configures each role to use a pair of bonded nics (nic2 and
# nic3) and configures an IP address on each relevant isolated network
# for each role. This template assumes use of network-isolation.yaml.
#
# FIXME: if/when we add functionality to heatclient to include heat
# environment files we should think about using it here to automatically
# include network-isolation.yaml.
resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml

  # This section is usually not modified, if in doubt stick to the defaults
  # TripleO overcloud networks
  OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
  OS::TripleO::Network::StorageMgmt: /usr/share/openstack-tripleo-heat-templates/network/storage_mgmt.yaml
  OS::TripleO::Network::Storage: /usr/share/openstack-tripleo-heat-templates/network/storage.yaml
  OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
  OS::TripleO::Network::Management: /usr/share/openstack-tripleo-heat-templates/network/management.yaml

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Network::Ports::TenantVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Network::Ports::ManagementVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Controller::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml

  # Port assignments for the ceph storage role
  OS::TripleO::CephStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::CephStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::CephStorage::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml

  # Port assignments for the swift storage role
  OS::TripleO::SwiftStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::SwiftStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::SwiftStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::SwiftStorage::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml

  # Port assignments for the block storage role
  OS::TripleO::BlockStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::BlockStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::BlockStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::BlockStorage::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml

parameter_defaults:
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    AodhApiNetwork: internal_api
    GnocchiApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: ctlplane # Admin connection for Undercloud
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage # Changed from storage_mgmt
    SwiftProxyNetwork: storage
    SaharaApiNetwork: internal_api
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage # Changed from storage_mgmt
    CephPublicNetwork: storage
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage
  InternalApiNetCidr: 172.16.0.0/24
  TenantNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  ManagementNetCidr: 172.20.0.0/24
  ExternalNetCidr: 192.168.122.0/24
  InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '192.168.122.200', 'end': '192.168.122.230'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 192.168.122.1
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.168.24.1
  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp: 192.168.24.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["8.8.8.8","8.8.4.4"]
  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 202
  StorageMgmtNetworkVlanID: 203
  TenantNetworkVlanID: 204
  ManagementNetworkVlanID: 205
  ExternalNetworkVlanID: 100
  ManagementInterfaceDefaultRoute: 192.168.24.1
  NeutronExternalNetworkBridge: "''"
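The resource_registry section above points each role at a custom NIC layout under /home/stack/templates/nic-configs/; those files are not reproduced in this post. For orientation, the fragment below is only a sketch of what the bonded-NIC part of controller.yaml typically looks like (os-net-config syntax, modelled on the bond-with-vlans samples shipped with openstack-tripleo-heat-templates; the bridge name, bond options and nic2/nic3 numbering are assumptions to adapt, and the remaining VLANs follow the same pattern). It lives under the network_config list of the template:

  - type: ovs_bridge
    name: br-ex
    dns_servers: {get_param: DnsServers}
    members:
      - type: ovs_bond
        name: bond1
        ovs_options: {get_param: BondInterfaceOvsOptions}
        members:
          - type: interface
            name: nic2
            primary: true
          - type: interface
            name: nic3
      - type: vlan
        device: bond1
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: TenantNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: TenantIpSubnet}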
storage-environment.yaml
## A Heat environment file which can be used to set up storage
## backends. Defaults to Ceph used as a backend for Cinder, Glance and
## Nova ephemeral storage.
resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml

parameter_defaults:

  #### BACKEND SELECTION ####

  ## Whether to enable iscsi backend for Cinder.
  CinderEnableIscsiBackend: false
  ## Whether to enable rbd (Ceph) backend for Cinder.
  CinderEnableRbdBackend: true
  ## Cinder Backup backend can be either 'ceph' or 'swift'.
  CinderBackupBackend: ceph
  ## Whether to enable NFS backend for Cinder.
  # CinderEnableNfsBackend: false
  ## Whether to enable rbd (Ceph) backend for Nova ephemeral storage.
  NovaEnableRbdBackend: true
  ## Glance backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GlanceBackend: rbd
  ## Gnocchi backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GnocchiBackend: rbd

  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/sdb': {}

  #### CINDER NFS SETTINGS ####

  ## NFS mount options
  # CinderNfsMountOptions: ''
  ## NFS mount point, e.g. '192.168.122.1:/export/cinder'
  # CinderNfsServers: ''

  #### GLANCE NFS SETTINGS ####

  ## Make sure to set `GlanceBackend: file` when enabling NFS
  ##
  ## Whether to make Glance 'file' backend a NFS mount
  # GlanceNfsEnabled: false
  ## NFS share for image storage, e.g. '192.168.122.1:/export/glance'
  ## (If using IPv6, use both double- and single-quotes,
  ## e.g. "'[fdd0::1]:/export/glance'")
  # GlanceNfsShare: ''
  ## Mount options for the NFS image storage mount point
  # GlanceNfsOptions: 'intr,context=system_u:object_r:glance_var_lib_t:s0'

  #### CEPH SETTINGS ####

  ## When deploying Ceph Nodes through the oscplugin CLI, the following
  ## parameters are set automatically by the CLI. When deploying via
  ## heat stack-create or ceph on the controller nodes only,
  ## they need to be provided manually.

  ## Number of Ceph storage nodes to deploy
  # CephStorageCount: 0
  ## Ceph FSID, e.g. '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  # CephClusterFSID: ''
  ## Ceph monitor key, e.g. 'AQC+Ox1VmEr3BxAALZejqeHj50Nj6wJDvs96OQ=='
  # CephMonKey: ''
  ## Ceph admin key, e.g. 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
  # CephAdminKey: ''
  ## Ceph client key, e.g 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
  # CephClientKey: ''
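The ExtraConfig block in this file tells puppet-ceph to build one OSD per Ceph storage node on /dev/sdb. Ceph will skip a disk that still carries an old partition table or filesystem, so the disk should be blank with a GPT label before the deploy. A hedged manual preparation example (the device name comes from the template above; this destroys any data on the disk, and a first-boot wipe script is the more automated alternative):

$ sudo sgdisk --zap-all /dev/sdb       # wipe old partition tables / GPT structures
$ sudo parted -s /dev/sdb mklabel gpt  # ceph-disk expects a GPT label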
Execute the overcloud deployment from the director (undercloud) node:

$ openstack overcloud deploy --templates \
    -e templates/node-info.yaml \
    -e templates/network-environment.yaml \
    -e templates/storage-environment.yaml \
    --libvirt-type qemu
NOTE: This deployment was done on KVM virtual machines, hence `--libvirt-type qemu`.
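The deploy can be watched from the undercloud while it runs, and a few basic checks confirm the result afterwards (illustrative only; assumes the default stack name overcloud and that the deploy wrote ~/overcloudrc):

$ source ~/stackrc
$ openstack stack list       # the overcloud stack should reach CREATE_COMPLETE
$ openstack server list      # 1 controller, 1 compute and 3 ceph-storage nodes
$ source ~/overcloudrc
$ openstack endpoint list    # overcloud service endpoints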