Table of Contents
  1. Introduction
  2. Changelog
  3. References
  4. Lab Environment
  5. OpenStack Components
  6. OpenStack
  7. Neutron SDN
  8. OpenStack PackStack
  9. OpenStack Testing
  10. OpenStack Development

Introduction

The six-day Red Hat OpenStack training was well worth it, and the labs were packed with material to absorb and digest, so this article shares the full lab walkthrough. The lab environment is generated quickly by deploying RHEL 7.1 with Vagrant; a laptop with 4 GB of RAM or more is enough to build it. Every configuration parameter is annotated, which I hope helps you learn and get familiar with OpenStack quickly. This may also be the longest post on my blog, padding and all.

OpenStack is one of the best options for replacing VMware in a private cloud.

Changelog

2016-03-05 - Added the SlideShare slides; expanded the OpenStack testing and development sections
2016-03-04 - Initial draft

Original post - https://wsgzao.github.io/post/openstack/

Further reading

References

file://E:\all-in-one (2 folders, 0 files, 0 bytes, 486.59 MB in total.)
├─docs (1 folders, 6 files, 18.04 MB, 18.08 MB in total.)
│ │ classroom.pptx 8.42 MB
│ │ env_cfg.txt 338 bytes
│ │ note.sh 48.57 KB
│ │ OpenStack Installation Guide (EL7 ver.).pdf 1.52 MB
│ │ Red Hat Enterprise Linux OpenStack Platform 7 Installation Reference en-US.pdf 8.05 MB
│ │ ~$classroom.pptx 165 bytes
│ └─packstack网络配置文件 (0 folders, 1 files, 41.42 KB, 41.42 KB in total.)
│ packstack-answers.txt 41.42 KB
└─env_for_windows (1 folders, 2 files, 468.51 MB, 468.51 MB in total.)
│ rhel-7.1-x86_64.box 468.51 MB
│ Vagrantfile 3.01 KB
└─.vagrant (0 folders, 0 files, 0 bytes, 0 bytes in total.)

Viewing the slides on SlideShare may require getting past the GFW; if you need help, see my post 《GFW翻墙小结》.

  1. Red Hat OpenStack Platform 7 Training.pptx
  2. OpenStack实战指南.pdf

Lab Environment

Versions
OS: Windows 10 x86_64
VirtualBox: VirtualBox-5.0.14-105127-Win
Vagrant: vagrant_1.8.1
Terminal: NetSarang.Xmanager.Enterprise.5

Notes

  1. Enable VT in the BIOS
  2. Run the steps below with administrator privileges

Setup Steps

  1. Install Oracle VirtualBox
  2. Install Vagrant
  3. In the working directory E:\vagrantbox, run: vagrant init (generates a Vagrantfile; copy and modify the template)
  4. Configure Oracle VirtualBox so its networks match those specified in the Vagrantfile
  5. In the working directory, run: vagrant up (start) and vagrant halt (graceful shutdown)
  6. After each day's lab, shut down and take a snapshot as a backup; if you prefer not to shut down, save the machine state, similar to a VMware suspend
cd E:\vagrantbox
vagrant init
vagrant box add rhel-7.1 rhel-7.1-x86_64.box
vagrant box list
vagrant up {node1|node2|node3}
vagrant destroy {node1|node2|node3}
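Step 6 above, the daily snapshot, can also be scripted against VirtualBox directly. A minimal dry-run sketch, assuming the VM names node1.demo1, node2.demo1, and node3.demo1 set by the Vagrantfile; it only prints the VBoxManage commands so you can review them before running anything:

```shell
# Print (do not run) a dated snapshot command per lab VM.
# VM names follow the v.name values in the Vagrantfile; adjust if yours differ.
stamp=$(date +%Y%m%d)
for vm in node1.demo1 node2.demo1 node3.demo1; do
  echo "VBoxManage snapshot \"$vm\" take \"lab-$stamp\" --description \"daily lab backup\""
done
```

Once the output looks right, pipe it into a shell (or drop the echo) to actually take the snapshots.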

Edit the Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.

$script = <<SCRIPT
echo doing provision...
if [ ! $LANG = "en_US.UTF-8" ]; then
echo "export LC_ALL=en_US.UTF-8" >> /root/.bashrc
fi
echo -e "10.30.0.10\tnode-1.example.com\tnode-1\n10.30.0.11\tnode-2.example.com\tnode-2\n10.30.0.12\tnode-3.example.com\tnode-3" >> /etc/hosts
echo -e "192.168.0.10\tnode-1.example.com\tnode-1\n192.168.0.11\tnode-2.example.com\tnode-2\n192.168.0.12\tnode-3.example.com\tnode-3" >> /etc/hosts
rm -f /etc/yum.repos.d/*


yum install -y wget && wget -O /etc/yum.repos.d/rh-openstack-7-el7.repo http://192.168.1.100/content/cfgfile/rh-openstack-7-el7.repo
yum install -y wget && wget -O /etc/yum.repos.d/rh-rhel-7-el7.repo http://192.168.1.100/content/cfgfile/rh-rhel-7-el7.repo

echo done...
SCRIPT

Vagrant.configure(2) do |config|
config.vm.define :node1 do |node1|
node1.vm.box = "rhel-7.1"
node1.vm.provision "shell", inline: $script
node1.vm.hostname = "node-1.example.com"
node1.vm.network "forwarded_port", guest: 80, host: 18080
node1.vm.network "forwarded_port", guest: 22, host: 12222
node1.vm.provider :virtualbox do |v|
v.name = "node1.demo1"
v.memory = 2048
v.cpus = 2
end
node1.vm.network :private_network, ip: "10.30.0.10", auto_config: true
node1.vm.network :private_network, ip: "192.168.0.10", auto_config: true
end
config.vm.define :node2 do |node2|
node2.vm.box = "rhel-7.1"
node2.vm.provision "shell", inline: $script
node2.vm.hostname = "node-2.example.com"
node2.vm.provider :virtualbox do |v|
v.name = "node2.demo1"
v.memory = 1024
v.cpus = 1
end
node2.vm.network :private_network, ip: "10.30.0.11", auto_config: true
node2.vm.network :private_network, ip: "192.168.0.11", auto_config: true
end
config.vm.define :node3 do |node3|
node3.vm.box = "rhel-7.1"
node3.vm.provision "shell", inline: $script
node3.vm.hostname = "node-3.example.com"
node3.vm.provider :virtualbox do |v|
v.name = "node3.demo1"
v.memory = 1024
v.cpus = 1
end
node3.vm.network :private_network, ip: "10.30.0.12", auto_config: true
node3.vm.network :private_network, ip: "192.168.0.12", auto_config: true
end
end
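The inline $script provisioner above appends six host records (three nodes on each of the two private networks) via two long echo -e lines. The same table can be produced with a loop, which keeps the two networks from drifting apart when you change an address; a sketch that writes to a local hosts.sample file instead of /etc/hosts:

```shell
# Generate the node entries the provisioner appends to /etc/hosts.
# Hosts are node-1..node-3 at .10..12 on both private networks.
for net in 10.30.0 192.168.0; do
  for n in 1 2 3; do
    printf '%s.%d\tnode-%d.example.com\tnode-%d\n' "$net" "$((n + 9))" "$n" "$n"
  done
done > hosts.sample
cat hosts.sample
```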

OpenStack Components

Code Name OpenStack Service Description
Keystone Identity Provides authentication and authorization for the other OpenStack services, along with a catalog of their endpoints.
Glance Image Stores and retrieves virtual machine disk images. OpenStack Compute uses them during instance provisioning.
Nova Compute Manages the lifecycle of compute instances in an OpenStack environment. Responsibilities include spawning, scheduling, and terminating instances on demand.
Neutron Networking Provides network connectivity as a service to other OpenStack services, such as OpenStack Compute. Offers an API for users to define networks and attach instances to them; its pluggable architecture supports many popular networking vendors and technologies.
Cinder Block Storage Provides persistent block storage to running instances. Its pluggable driver architecture facilitates creating and managing block storage devices.
Swift Object Storage Stores and retrieves arbitrary unstructured data objects via a REST, HTTP-based API. Data replication and a scale-out architecture make it highly fault tolerant. It is not a file server with mountable directories.
Heat Orchestration Orchestrates multiple composite cloud applications using the native HOT template format or the AWS CloudFormation template format, through both the native OpenStack REST API and a CloudFormation-compatible query API.
Horizon Dashboard Provides a web-based self-service portal for interacting with the underlying OpenStack services, such as launching instances, assigning IP addresses, and configuring access controls.
Ceilometer Telemetry Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistics purposes.

Lab Environment

Node IP Role
node-1 192.168.0.10 Controller Node, Network Node
node-2 192.168.0.11 Compute Node 1
node-3 192.168.0.12 Compute Node 2

OpenStack

#===========================================================
# Preinstallation
#===========================================================

#>>>>>>>>>>>>>>>>>>>> Environment

node-1 | 192.168.0.10 | Controller Node, Network Node
node-2 | 192.168.0.11 | Compute Node 1
node-3 | 192.168.0.12 | Compute Node 2

#Repository
cd /etc/yum.repos.d

wget http://192.168.1.100/content/repofiles/rh-ceph-1-el7.repo
wget http://192.168.1.100/content/repofiles/rh-rhel-7-el7.repo
wget http://192.168.1.100/content/repofiles/rh-rhelosp-7-el7.repo

yum clean all
yum repolist

yum install -y net-tools

#hosts
cat /etc/hosts

127.0.0.1 node-1.example.com node-1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.30.0.10 node-1.example.com node-1
10.30.0.11 node-2.example.com node-2
10.30.0.12 node-3.example.com node-3
192.168.0.10 node-1.example.com node-1
192.168.0.11 node-2.example.com node-2
192.168.0.12 node-3.example.com node-3

#ping each node to verify connectivity
ping node-1
ping node-2
ping node-3

#set selinux to permissive
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
yum install -y ntp

#edit the ntp configuration
vi /etc/ntp.conf

#on the controller node
server 127.127.1.0 iburst
#on the other nodes, point to 10.30.0.10
server 10.30.0.10 iburst

systemctl enable ntpd.service
systemctl start ntpd.service

#disable firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service

#disable NetworkManager
systemctl disable NetworkManager
systemctl stop NetworkManager

#sync ntp clients against the controller
ntpdate -u 10.30.0.10

#install the database on the controller node
yum install -y mariadb mariadb-server MySQL-python

#configure mariadb
vi /etc/my.cnf.d/mariadb_openstack.cnf

[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

#start mariadb and secure the installation
systemctl enable mariadb.service
systemctl start mariadb.service
mysql_secure_installation
[enter for none]
[y]
redhat
[y]

#install rabbitmq
yum install -y rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
setenforce 0

#create the keystone database
mysql -u root -predhat

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'redhat';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'redhat';
EXIT
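This CREATE/GRANT pattern comes back almost unchanged for glance, nova, and neutron later on; only the service name varies. A small helper that emits the SQL keeps the four databases consistent. This is a sketch: osdb_sql is a name I made up, and 'redhat' is just this lab's password convention.

```shell
# Emit the create-database and grant statements for an OpenStack service DB.
osdb_sql() {
  svc=$1; pw=$2
  cat <<SQL
CREATE DATABASE ${svc};
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'localhost' IDENTIFIED BY '${pw}';
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'%' IDENTIFIED BY '${pw}';
SQL
}

# Pipe it straight into mysql, e.g.:
#   osdb_sql keystone redhat | mysql -u root -predhat
osdb_sql keystone redhat
```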

#===========================================================
# OpenStack Installation
#===========================================================

#>>>>>>>>>>>>>>>>>>>> Keystone Installation

#install keystone
yum install -y openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached
systemctl enable memcached.service
systemctl start memcached.service

#configure keystone
vi /etc/keystone/keystone.conf

[DEFAULT]
...
admin_token = redhat
verbose = True
[database]
...
connection = mysql://keystone:redhat@10.30.0.10/keystone
[memcache]
...
servers = localhost:11211
[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.memcache.Token
[revoke]
...
driver = keystone.contrib.revoke.backends.sql.Revoke

#populate the keystone database
su -s /bin/sh -c "keystone-manage db_sync" keystone
tail -f /var/log/keystone/keystone.log

#set the ServerName
vi /etc/httpd/conf/httpd.conf

ServerName node-1

#configure the keystone VirtualHosts
vi /etc/httpd/conf.d/wsgi-keystone.conf

Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /var/www/cgi-bin/keystone/main
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LogLevel info
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LogLevel info
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
</VirtualHost>

#create the keystone WSGI entry files
mkdir -p /var/www/cgi-bin/keystone
touch /var/www/cgi-bin/keystone/{main,admin}

#put the same wsgi content into both files
vi /var/www/cgi-bin/keystone/main
vi /var/www/cgi-bin/keystone/admin

import os
from keystone.server import wsgi as wsgi_server
name = os.path.basename(__file__)
application = wsgi_server.initialize_application(name)

#fix ownership and permissions
chown -R keystone:keystone /var/www/cgi-bin/keystone
chmod 755 /var/www/cgi-bin/keystone/*

#start httpd and verify it is listening
systemctl enable httpd.service
systemctl start httpd.service
tail -f /var/log/httpd/keystone-error.log
ps -ef|grep http
ps -ef|grep keystone
netstat -ntlp|grep 5000
netstat -ntlp|grep 35357

#export a temporary admin token
export OS_TOKEN=redhat
export OS_URL=http://10.30.0.10:35357/v2.0
export | grep OS

openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --publicurl http://10.30.0.10:5000/v2.0 --internalurl http://10.30.0.10:5000/v2.0 --adminurl http://10.30.0.10:35357/v2.0 --region RegionOne identity
openstack project create --description "Admin Project" admin
openstack user create --password-prompt admin
User Password:[redhat]
Repeat User Password:[redhat]

openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --description "Service Project" service

openstack project create --description "Demo Project" demo
openstack user create --password-prompt demo
User Password:[redhat]
Repeat User Password:[redhat]

openstack role create user
openstack role add --project demo --user demo user
unset OS_TOKEN OS_URL
export | grep OS

#<<<<<<<<<<<<<<<<<<<< Verification >>>>>>>>>>>>>>>>>>>>
openstack --os-auth-url http://10.30.0.10:35357 --os-project-name admin --os-username admin --os-auth-type password token issue
ps -ef|grep memcached
systemctl start memcached.service

#create admin-openrc
vi /root/admin-openrc.sh

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://10.30.0.10:35357/v3
export PS1='[\u@\h \W(admin)]\$ '

source /root/admin-openrc.sh
openstack user list

+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 3a90eefdd4af4f64b2b55e5787c8b480 | nova |
| 81fda8223f424eacb358b9ecf30d8514 | demo |
| afc0c39b498044a6b012ca85e1a82e3e | cinder |
| c66ab63dac7142438383f880ceae77e3 | neutron |
| de0c3f0303394b3e8d127119721e7f6b | admin |
| e6a433779b8749d689e4f8d47026028c | glance |
+----------------------------------+---------+

#create demo-openrc
vi /root/demo-openrc.sh

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://10.30.0.10:5000/v3
export PS1='[\u@\h \W(demo)]\$ '

source /root/demo-openrc.sh
openstack token issue

+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2016-03-02T02:25:24.285395Z |
| id | 06c106e9ffb747a1812e9b4de73b6977 |
| project_id | f9ae9069651e40d0baaa7ad4e7d1a160 |
| user_id | de0c3f0303394b3e8d127119721e7f6b |
+------------+----------------------------------+
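Because the lab keeps switching between admin-openrc.sh and demo-openrc.sh, it helps to check which credentials are loaded before running openstack commands; sourcing the wrong file shows up as confusing authentication failures. A sketch that only inspects the OS_* variables the rc files above export (check_osrc is a made-up helper name):

```shell
# Report the active OpenStack credentials, or fail if no openrc is loaded.
check_osrc() {
  if [ -z "$OS_USERNAME" ] || [ -z "$OS_AUTH_URL" ]; then
    echo "no openrc loaded: source /root/admin-openrc.sh or /root/demo-openrc.sh" >&2
    return 1
  fi
  echo "user=$OS_USERNAME project=$OS_PROJECT_NAME auth=$OS_AUTH_URL"
}
check_osrc || true   # prints a warning until an rc file has been sourced
```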

#>>>>>>>>>>>>>>>>>>>> Glance Installation

#create the glance database and user
mysql -u root -p

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'redhat';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'redhat';
EXIT

source /root/admin-openrc.sh
#set the password to 'redhat'
openstack user create --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image service" image
openstack endpoint create --publicurl http://10.30.0.10:9292 --internalurl http://10.30.0.10:9292 --adminurl http://10.30.0.10:9292 --region RegionOne image

#install glance
yum install -y openstack-glance python-glance python-glanceclient

#configure glance-api
vi /etc/glance/glance-api.conf

[DEFAULT]
...
verbose = True
notification_driver = noop
[database]
...
connection = mysql://glance:redhat@10.30.0.10/glance
[keystone_authtoken]
...
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = redhat
[paste_deploy]
...
flavor = keystone
[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

#configure glance-registry
vi /etc/glance/glance-registry.conf

[DEFAULT]
...
verbose = True
notification_driver = noop
[database]
...
connection = mysql://glance:redhat@10.30.0.10/glance
[keystone_authtoken]
...
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = redhat
[paste_deploy]
...
flavor = keystone

#populate the glance database and start the services
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

#<<<<<<<<<<<<<<<<<<<< Verification >>>>>>>>>>>>>>>>>>>>
cd /root
echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
source admin-openrc.sh
glance image-list
mkdir /tmp/images
wget -P /tmp/images http://192.168.1.100/content/images/vdisk/cirros-0.3.4-x86_64-disk.img
glance image-create --name "cirros-in-fs" --file /tmp/images/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
glance image-list
rm -r /tmp/images

+--------------------------------------+--------------+
| ID | Name |
+--------------------------------------+--------------+
| 6bbafd22-6791-4a9d-8240-355a64f8a5c1 | cirros-in-fs |
+--------------------------------------+--------------+
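glance records an md5 checksum for every uploaded image, so it is worth comparing the downloaded file against a published checksum before running glance image-create. A sketch; verify_md5 is a made-up helper, and the checksum in the usage comment is a placeholder, not the authoritative cirros value:

```shell
# Compare a file's md5 with an expected value before uploading it to glance.
verify_md5() {
  file=$1; expected=$2
  actual=$(md5sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK: $file"
  else
    echo "checksum MISMATCH: $file ($actual != $expected)" >&2
    return 1
  fi
}

# Usage (substitute the checksum published for your image):
#   verify_md5 /tmp/images/cirros-0.3.4-x86_64-disk.img <expected-md5>
```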

#>>>>>>>>>>>>>>>>>>>> Nova Installation

#create the nova database
mysql -u root -p

CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'redhat';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'redhat';
EXIT

source admin-openrc.sh
#Set the password to 'redhat'
openstack user create --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --publicurl http://10.30.0.10:8774/v2/%\(tenant_id\)s --internalurl http://10.30.0.10:8774/v2/%\(tenant_id\)s --adminurl http://10.30.0.10:8774/v2/%\(tenant_id\)s --region RegionOne compute

#install nova
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

#configure nova
vi /etc/nova/nova.conf

[DEFAULT]
...
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.30.0.10
vncserver_listen = 10.30.0.10
vncserver_proxyclient_address = 10.30.0.10
[database]
...
connection = mysql://nova:redhat@10.30.0.10/nova
[oslo_messaging_rabbit]
...
rabbit_host = 10.30.0.10
rabbit_userid = guest
rabbit_password = guest
[keystone_authtoken]
...
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = redhat
[glance]
...
host = 10.30.0.10
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp

#populate the nova database and start the services
su -s /bin/sh -c "nova-manage db sync" nova
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

#>>>>>>>>>>>>>>>>>>>> On Compute Nodes

#install nova-compute on the compute nodes
yum install -y openstack-nova-compute sysfsutils

#configure nova
vi /etc/nova/nova.conf

[DEFAULT]
...
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
# set to your local ip on management network
my_ip = 10.30.0.11
vnc_enabled = True
vncserver_listen = 0.0.0.0
# set to your local ip on management network
vncserver_proxyclient_address = 10.30.0.11
novncproxy_base_url = http://10.30.0.10:6080/vnc_auto.html
[oslo_messaging_rabbit]
...
rabbit_host = 10.30.0.10
rabbit_userid = guest
rabbit_password = guest
[keystone_authtoken]
...
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = redhat
[glance]
...
host = 10.30.0.10
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp

#start the nova-compute service (fall back to qemu when no VT is available)
RETR=$(egrep -c '(vmx|svm)' /proc/cpuinfo); if [ $RETR -eq 0 ]; then sed -i 's/^#virt_type=.*/virt_type=qemu/' /etc/nova/nova.conf; fi
firewall-cmd --set-default-zone=trusted
firewall-cmd --reload
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

#<<<<<<<<<<<<<<<<<<<< Verification >>>>>>>>>>>>>>>>>>>>

#Go back to your Controller Node
cd /root
source admin-openrc.sh
nova service-list
nova endpoints
nova image-list

+----+------------------+--------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+--------------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | node-1.example.com | internal | enabled | up | 2016-03-02T01:31:52.000000 | - |
| 2 | nova-conductor | node-1.example.com | internal | enabled | up | 2016-03-02T01:31:52.000000 | - |
| 3 | nova-cert | node-1.example.com | internal | enabled | up | 2016-03-02T01:31:52.000000 | - |
| 4 | nova-scheduler | node-1.example.com | internal | enabled | up | 2016-03-02T01:31:51.000000 | - |
| 5 | nova-compute | node-2.example.com | nova | enabled | up | 2016-03-02T01:31:52.000000 | - |
| 6 | nova-compute | node-3.example.com | nova | enabled | up | 2016-03-02T01:31:53.000000 | - |
+----+------------------+--------------------+----------+---------+-------+----------------------------+-----------------+
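With one controller and two compute nodes, the table above should show all six services in state 'up'. That check can be scripted by parsing the seventh |-separated column of the nova service-list output; count_up is a made-up helper:

```shell
# Count services reported 'up' in `nova service-list` tabular output.
count_up() { awk -F'|' 'NF > 5 { gsub(/ /, "", $7); if ($7 == "up") c++ } END { print c + 0 }'; }

# Usage against the live API (expect 6 in this lab):
#   nova service-list | count_up
```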


#>>>>>>>>>>>>>>>>>>>> Neutron Installation

#On All of Your OpenStack Nodes
#restart networking
systemctl stop NetworkManager.service; systemctl disable NetworkManager.service
systemctl restart network.service

#On Controller (Network) Node
vi /etc/sysctl.conf

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

sysctl -p

#create the neutron database
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'redhat';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'redhat';
EXIT

source admin-openrc.sh
# Set the password to 'redhat'
openstack user create --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --publicurl http://10.30.0.10:9696 --adminurl http://10.30.0.10:9696 --internalurl http://10.30.0.10:9696 --region RegionOne network

#install neutron
yum install -y openstack-neutron openstack-neutron-ml2 python-neutronclient openstack-neutron-openvswitch

#configure neutron
vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.30.0.10:8774/v2
rpc_backend=rabbit
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = redhat
[database]
connection = mysql://neutron:redhat@10.30.0.10/neutron
[nova]
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = redhat
[oslo_concurrency]
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = 10.30.0.10
rabbit_userid = guest
rabbit_password = guest

#configure ml2_conf.ini
vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = public
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 1:10000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 192.168.0.10
bridge_mappings = public:br-ex
[agent]
tunnel_types = vxlan

#configure ovs_neutron_plugin.ini
vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.0.10
bridge_mappings = public:br-ex
[agent]
tunnel_types = vxlan
[securitygroup]

#point nova at neutron
vi /etc/nova/nova.conf

[DEFAULT]
rpc_backend=rabbit
my_ip=10.30.0.10
auth_strategy=keystone
vncserver_listen=10.30.0.10
vncserver_proxyclient_address=10.30.0.10
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
[barbican]
[cells]
[cinder]
[conductor]
[database]
connection = mysql://nova:redhat@10.30.0.10/nova
[ephemeral_storage_encryption]
[glance]
host = 10.30.0.10
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = redhat
[libvirt]
[metrics]
[neutron]
url = http://10.30.0.10:9696
auth_strategy = keystone
admin_auth_url = http://10.30.0.10:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = redhat
service_metadata_proxy = True
metadata_proxy_shared_secret = redhat
[osapi_v3]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[workarounds]
[xenserver]
[zookeeper]
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = 10.30.0.10
rabbit_userid = guest
rabbit_password = guest

#initialize the neutron database and restart the nova services
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
systemctl enable neutron-server.service; systemctl start neutron-server.service

#configure the l3 agent
vi /etc/neutron/l3_agent.ini

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True

#configure the dhcp agent
vi /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_delete_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

#configure dnsmasq-neutron (DHCP option 26 lowers the instance MTU to allow for tunnel overhead)
vi /etc/neutron/dnsmasq-neutron.conf

dhcp-option-force=26,1454

#kill any running dnsmasq processes
pkill dnsmasq

#configure the metadata agent
vi /etc/neutron/metadata_agent.ini

[DEFAULT]
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = redhat
nova_metadata_ip = 10.30.0.10
metadata_proxy_shared_secret = redhat

#Restart services and start openvswitch
openstack-service restart
systemctl enable openvswitch.service
systemctl start openvswitch.service
ovs-vsctl add-br br-ex

#Configure enp0s8
vi /etc/sysconfig/network-scripts/ifcfg-enp0s8

DEVICE=enp0s8
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
NAME=enp0s8
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
TYPE=OVSPort
PEERDNS=no

#Configure br-ex
vi /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex
STP=no
BRIDGING_OPTS=priority=32768
TYPE=OVSBridge
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br-ex
ONBOOT=yes
IPADDR=10.30.0.10
PREFIX=24
GATEWAY=10.30.0.1
DEVICETYPE=ovs
USERCTL=yes

#Restart networking and enable the neutron agents
systemctl restart network.service
ovs-vsctl list-ports br-ex
ethtool -K enp0s8 gro off
systemctl restart network.service
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
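The `sed` one-liner above rewrites the `--config-file` path inside the copied unit file. Its effect can be checked on a sample `ExecStart` line (the exact wording in the real unit file may differ; this is illustrative only):

```shell
#!/bin/sh
# Illustrative: apply the same substitution the guide runs on the systemd
# unit file to a sample ExecStart line and show the result.
line='ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'
new=$(printf '%s\n' "$line" | sed 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g')
echo "$new"
# -> ... --config-file /etc/neutron/plugin.ini
```

Because `/etc/neutron/plugin.ini` is the symlink created above, the agent ends up reading `ml2_conf.ini`.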

#On your Compute Node
vi /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

sysctl -p

#Install the neutron packages
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

#Configure neutron
vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
auth_strategy = keystone
rpc_backend=rabbit
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = redhat
[database]
[nova]
[oslo_concurrency]
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = 10.30.0.10
rabbit_userid = guest
rabbit_password = guest

#Configure ml2_conf
vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 1:10000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
# set the value to your local ip in tunnel network
local_ip = 192.168.0.11
[agent]
tunnel_types = vxlan

#Configure ovs_neutron_plugin
vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
# set the value to your local ip in tunnel network
local_ip = 192.168.0.11
[agent]
tunnel_types = vxlan
[securitygroup]

#Enable and start openvswitch
systemctl enable openvswitch.service
systemctl start openvswitch.service

#Configure nova
vi /etc/nova/nova.conf

[DEFAULT]
rpc_backend=rabbit
# set to your local ip in management network
my_ip=10.30.0.11
auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_base_url=http://10.30.0.10:6080/vnc_auto.html
vncserver_listen=0.0.0.0
# set to your local ip in management network
vncserver_proxyclient_address=10.30.0.11
vnc_enabled=true
[api_database]
[barbican]
[cells]
[cinder]
[conductor]
[database]
[ephemeral_storage_encryption]
[glance]
host = 10.30.0.10
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = redhat
[libvirt]
virt_type=qemu
[metrics]
[neutron]
url = http://10.30.0.10:9696
auth_strategy = keystone
admin_auth_url = http://10.30.0.10:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = redhat
[osapi_v3]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[workarounds]
[xenserver]
[zookeeper]
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = 10.30.0.10
rabbit_userid = guest
rabbit_password = guest

#Restart nova-compute and start neutron-openvswitch-agent
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service

#<<<<<<<<<<<<<<<<<<<< Verification >>>>>>>>>>>>>>>>>>>>

#On your Controller Node
cd /root
source admin-openrc.sh
neutron ext-list

+-----------------------+-----------------------------------------------+
| alias | name |
+-----------------------+-----------------------------------------------+
| security-group | security-group |
| l3_agent_scheduler | L3 Agent Scheduler |
| net-mtu | Network MTU |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| provider | Provider Network |
| agent | agent |
| quotas | Quota management support |
| subnet_allocation | Subnet Allocation |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| l3-ha | HA Router extension |
| multi-provider | Multi Provider Network |
| external-net | Neutron external network |
| router | Neutron L3 Router |
| allowed-address-pairs | Allowed Address Pairs |
| extraroute | Neutron Extra Route |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| dvr | Distributed Virtual Router |
+-----------------------+-----------------------------------------------+


#On all OpenStack Nodes
ovs-vsctl show

[root@node-1 ~(admin)]# ovs-vsctl show
4bedc59c-a685-4a53-98b8-28ab9ac3479d
Bridge br-tun
fail_mode: secure
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "vxlan-c0a8000c"
Interface "vxlan-c0a8000c"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.0.10", out_key=flow, remote_ip="192.168.0.12"}
Port "vxlan-c0a8000b"
Interface "vxlan-c0a8000b"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.0.10", out_key=flow, remote_ip="192.168.0.11"}
Port br-tun
Interface br-tun
type: internal
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
Port "enp0s8"
Interface "enp0s8"
Bridge br-int
fail_mode: secure
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "qr-790ecb8b-da"
tag: 1
Interface "qr-790ecb8b-da"
type: internal
Port "qg-78c1257c-b9"
tag: 2
Interface "qg-78c1257c-b9"
type: internal
Port "tapa307ce86-ee"
tag: 1
Interface "tapa307ce86-ee"
type: internal
Port br-int
Interface br-int
type: internal
ovs_version: "2.4.0"

[root@node-2 ~]# ovs-vsctl show
73650c34-694d-49dd-8f6e-3c5996ef02a1
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port "vxlan-c0a8000c"
Interface "vxlan-c0a8000c"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.0.11", out_key=flow, remote_ip="192.168.0.12"}
Port "vxlan-c0a8000a"
Interface "vxlan-c0a8000a"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.0.11", out_key=flow, remote_ip="192.168.0.10"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.4.0"

[root@node-3 ~]# ovs-vsctl show
4836f089-eeb9-4116-93a2-c84028beabbf
Bridge br-int
fail_mode: secure
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "qvo85881db0-8d"
tag: 1
Interface "qvo85881db0-8d"
Port br-int
Interface br-int
type: internal
Bridge br-tun
fail_mode: secure
Port "vxlan-c0a8000b"
Interface "vxlan-c0a8000b"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.0.12", out_key=flow, remote_ip="192.168.0.11"}
Port "vxlan-c0a8000a"
Interface "vxlan-c0a8000a"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.168.0.12", out_key=flow, remote_ip="192.168.0.10"}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.4.0"
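The VXLAN port names in the output above are not arbitrary: the suffix after `vxlan-` is the remote tunnel endpoint's IPv4 address in hexadecimal (`c0a8000c` is 192.168.0.12). A small decoder, useful when reading `ovs-vsctl show` on a larger deployment:

```shell
#!/bin/sh
# Decode an OVS vxlan port name such as "vxlan-c0a8000c": the suffix is the
# remote tunnel IP in hex (c0.a8.00.0c -> 192.168.0.12).
decode_vxlan_port() {
  hex=${1#vxlan-}
  a=$(echo "$hex" | cut -c1-2); b=$(echo "$hex" | cut -c3-4)
  c=$(echo "$hex" | cut -c5-6); d=$(echo "$hex" | cut -c7-8)
  printf '%d.%d.%d.%d\n' "0x$a" "0x$b" "0x$c" "0x$d"
}
decode_vxlan_port vxlan-c0a8000c   # 192.168.0.12
decode_vxlan_port vxlan-c0a8000b   # 192.168.0.11
```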


#>>>>>>>>>>>>>>>>>>>> Network Initialization

cd /root
# Create an external provider network
source admin-openrc.sh
neutron net-create public --router:external --provider:physical_network public --provider:network_type flat
neutron subnet-create public 10.30.0.0/24 --name public-subnet --allocation-pool start=10.30.0.100,end=10.30.0.200 --disable-dhcp --gateway 10.30.0.1
# To create a vxlan tenant network
source demo-openrc.sh
neutron net-create private
neutron subnet-create private 192.168.1.0/24 --name private-subnet --gateway 192.168.1.1
neutron router-create private-router
neutron router-interface-add private-router private-subnet
neutron router-gateway-set private-router public
nova keypair-add demo-key
nova boot --flavor m1.tiny --image cirros-in-fs --nic net-id=$(neutron net-list | awk '/ private / {print $2}') --security-group default --key-name demo-key instance1
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
neutron floatingip-create public

+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 10.30.0.101 |
| floating_network_id | 987d51e4-28f9-4861-8020-fc455916f441 |
| id | c7071675-fb84-466d-ad67-927a016f9c23 |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | 0498aa8db8114baaa01bc0830f00dac6 |
+---------------------+--------------------------------------+

#nova floating-ip-associate instance1 $floating_ip_address
nova floating-ip-associate instance1 10.30.0.101
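Rather than copying the address out of the table by hand, the allocation and association can be chained by parsing the client output, in the same `awk` style the guide already uses for the net-id. An illustrative sketch, run here against the sample table shown above (in a live run, pipe the real `neutron floatingip-create public` output instead of the here-document):

```shell
#!/bin/sh
# Extract floating_ip_address from a `neutron floatingip-create`-style table.
fip=$(awk '/ floating_ip_address / {print $4}' <<'EOF'
+---------------------+--------------------------------------+
| floating_ip_address | 10.30.0.101                          |
+---------------------+--------------------------------------+
EOF
)
echo "$fip"   # 10.30.0.101
# nova floating-ip-associate instance1 "$fip"
```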


#>>>>>>>>>>>>>>>>>>>> Horizon Installation

#On your Controller Node

#Install the dashboard packages
yum install -y openstack-dashboard httpd mod_wsgi memcached python-memcached

#Configure openstack-dashboard
vi /etc/openstack-dashboard/local_settings
...
OPENSTACK_HOST = "10.30.0.10"
ALLOWED_HOSTS = '*'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

#Comment out the ServerName line in httpd.conf
vi /etc/httpd/conf/httpd.conf

#ServerName node-1


#Restart the services
chown -R apache:apache /usr/share/openstack-dashboard/static
systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service

#<<<<<<<<<<<<<<<<<<<< Verification >>>>>>>>>>>>>>>>>>>>
http://10.30.0.10/dashboard

#>>>>>>>>>>>>>>>>>>>> Cinder Installation

#On your Controller Node

#Create the cinder database
mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'redhat';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'redhat';
EXIT

#Configure cinder
cd /root
source admin-openrc.sh
#Set the password to 'redhat'
openstack user create --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack endpoint create --publicurl http://10.30.0.10:8776/v2/%\(tenant_id\)s --internalurl http://10.30.0.10:8776/v2/%\(tenant_id\)s --adminurl http://10.30.0.10:8776/v2/%\(tenant_id\)s --region RegionOne volume
openstack endpoint create --publicurl http://10.30.0.10:8776/v2/%\(tenant_id\)s --internalurl http://10.30.0.10:8776/v2/%\(tenant_id\)s --adminurl http://10.30.0.10:8776/v2/%\(tenant_id\)s --region RegionOne volumev2
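The `%\(tenant_id\)s` in the endpoint URLs is a literal Python %-format placeholder that the service substitutes per request; the backslashes only stop the shell from interpreting the parentheses. A one-line check of what actually gets stored:

```shell
#!/bin/sh
# The backslashes are consumed by the shell; the endpoint stores the literal
# placeholder %(tenant_id)s, which is filled in per tenant at request time.
url=http://10.30.0.10:8776/v2/%\(tenant_id\)s
echo "$url"
# -> http://10.30.0.10:8776/v2/%(tenant_id)s
```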

#Install the cinder packages
yum install -y openstack-cinder python-cinderclient python-oslo-db python-oslo-log qemu lvm2 targetcli MySQL-python
cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf
chown -R cinder:cinder /etc/cinder/cinder.conf
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

#Configure lvm
vi /etc/lvm/lvm.conf

devices {
...
# To accept the /dev/sdb device, /dev/sda device, and reject all other devices
filter = [ "a/sdb/", "a/sda/", "r/.*/"]
...
}

#Configure cinder
vi /etc/cinder/cinder.conf

[DEFAULT]
...
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.30.0.10
enabled_backends = lvm
glance_host = 10.30.0.10
[database]
...
connection = mysql://cinder:redhat@10.30.0.10/cinder
[oslo_messaging_rabbit]
...
rabbit_host = 10.30.0.10
rabbit_userid = guest
rabbit_password = guest
[oslo_concurrency]
...
lock_path = /var/lock/cinder
[keystone_authtoken]
...
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = redhat
[lvm]
...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

#Sync the database and start the cinder services
if [ ! -d /var/lock/cinder ]; then mkdir /var/lock/cinder; fi
chown -R cinder:cinder /var/lock/cinder
su -s /bin/sh -c "cinder-manage db sync" cinder
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service target.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service target.service
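The directory guard a few lines up (`if [ ! -d /var/lock/cinder ] ...`) can be collapsed into one idempotent command: `mkdir -p` creates the path if it is missing and is a silent no-op otherwise. A quick demonstration in a throwaway temp directory:

```shell
#!/bin/sh
# mkdir -p is idempotent: repeated calls succeed without error.
dir=$(mktemp -d)/var/lock/cinder
mkdir -p "$dir"
mkdir -p "$dir"          # second call is a no-op
[ -d "$dir" ] && echo exists   # exists
```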

#<<<<<<<<<<<<<<<<<<<< Verification >>>>>>>>>>>>>>>>>>>>

#Check the service list
cd /root
echo "export OS_VOLUME_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
source admin-openrc.sh
cinder service-list

+------------------+------------------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | node-1.example.com | nova | enabled | up | 2016-03-02T02:03:12.000000 | - |
| cinder-volume | node-1.example.com@lvm | nova | enabled | up | 2016-03-02T02:03:12.000000 | - |
+------------------+------------------------+------+---------+-------+----------------------------+-----------------+

source demo-openrc.sh
cinder list
cinder create --name test-lvm 10
cinder list

+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
| 59eb88b6-72f6-4e4e-b398-310f6ab028ed | available | test-lvm | 10 | - | false | |
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+

#===========================================================
#OpenStack DVR Configuration
#===========================================================

#On Controller Node

cd /root
source demo-openrc.sh

#nova delete $uuid
#Instances can also be terminated, and the gateway cleared, from the dashboard
nova delete 54d384b6-bca1-4fff-a2d6-a63c0b567c21
neutron router-gateway-clear private-router
neutron router-interface-delete private-router $(neutron net-show private |awk '/ subnets / { print $4 }')
neutron router-delete private-router
neutron subnet-delete private-subnet

cd /root
source admin-openrc.sh

#Edit neutron.conf
vi /etc/neutron/neutron.conf

[DEFAULT]
...
router_distributed = True

#Edit ml2_conf.ini
vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
...
mechanism_drivers = openvswitch,l2population
[agent]
...
l2_population = True
enable_distributed_routing = True

#Edit l3_agent.ini
vi /etc/neutron/l3_agent.ini

[DEFAULT]
...
agent_mode = dvr_snat

#>>>>>>>>>>>>>>>>>>>> On Compute Nodes

#Edit neutron.conf
vi /etc/neutron/neutron.conf

[DEFAULT]
...
router_distributed = True

#Edit ml2_conf.ini
vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
...
mechanism_drivers = openvswitch,l2population

[ovs]
...
bridge_mappings = public:br-ex

[agent]
...
l2_population = True
enable_distributed_routing = True

#Edit l3_agent.ini
vi /etc/neutron/l3_agent.ini

[DEFAULT]
...
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True
agent_mode = dvr

#Edit metadata_agent.ini
vi /etc/neutron/metadata_agent.ini

[DEFAULT]
...
auth_uri = http://10.30.0.10:5000
auth_url = http://10.30.0.10:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = redhat
nova_metadata_ip = 10.30.0.10
metadata_proxy_shared_secret = redhat

#Edit ifcfg-enp0s8
vi /etc/sysconfig/network-scripts/ifcfg-enp0s8

DEVICE=enp0s8
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
NAME=enp0s8
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
TYPE=OVSPort
PEERDNS=no

#Edit ifcfg-br-ex
vi /etc/sysconfig/network-scripts/ifcfg-br-ex


DEVICE=br-ex
STP=no
BRIDGING_OPTS=priority=32768
TYPE=OVSBridge
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br-ex
ONBOOT=yes
IPADDR=10.30.0.11
PREFIX=24
GATEWAY=10.30.0.1
DEVICETYPE=ovs
USERCTL=yes

#Restart networking, openvswitch and the neutron agents
ovs-vsctl add-br br-ex
systemctl restart network.service
systemctl enable neutron-l3-agent.service neutron-metadata-agent.service
systemctl start neutron-l3-agent.service neutron-metadata-agent.service
systemctl restart openvswitch.service
systemctl restart neutron-openvswitch-agent.service

#>>>>>>>>>>>>>>>>>>>> On Controller Node

#Restart neutron-server, the agents and openvswitch
systemctl restart neutron-server.service
systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart openvswitch.service


#<<<<<<<<<<<<<<<<<<<< Network Initialization >>>>>>>>>>>>>>>>>>>>

source demo-openrc.sh
neutron subnet-create private 192.168.100.0/24 --name private-subnet --gateway 192.168.100.1
neutron router-create private-router
neutron router-interface-add private-router private-subnet
neutron router-gateway-set private-router public
nova boot --flavor m1.tiny --image cirros-in-fs --nic net-id=$(neutron net-list | awk '/ private / {print $2}') --security-group default --key-name demo-key instance2
neutron floatingip-create public
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 10.30.0.103 |
| floating_network_id | 987d51e4-28f9-4861-8020-fc455916f441 |
| id | b6cff8cf-b0f5-4051-800a-9a90b7931d2e |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | 0498aa8db8114baaa01bc0830f00dac6 |
+---------------------+--------------------------------------+

#nova floating-ip-associate instance2 $FLOATINGIP
nova floating-ip-associate instance2 10.30.0.103

Neutron SDN

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.

$script = <<SCRIPT
echo doing provision...
if [ "$LANG" != "en_US.UTF-8" ]; then
echo "export LC_ALL=en_US.UTF-8" >> /root/.bashrc
fi
echo -e "10.30.0.10\tnode-1.example.com\tnode-1\n10.30.0.11\tnode-2.example.com\tnode-2\n10.30.0.12\tnode-3.example.com\tnode-3" >> /etc/hosts
echo -e "192.168.0.10\tnode-1.example.com\tnode-1\n192.168.0.11\tnode-2.example.com\tnode-2\n192.168.0.12\tnode-3.example.com\tnode-3" >> /etc/hosts
rm -f /etc/yum.repos.d/*


yum install -y wget && wget -O /etc/yum.repos.d/rh-openstack-7-el7.repo http://192.168.1.100/content/cfgfile/rh-openstack-7-el7.repo
yum install -y wget && wget -O /etc/yum.repos.d/rh-rhel-7-el7.repo http://192.168.1.100/content/cfgfile/rh-rhel-7-el7.repo

echo done...
SCRIPT

Vagrant.configure(2) do |config|
  config.vm.define :node1 do |node1|
    node1.vm.box = "rhel-7.1"
    node1.vm.provision "shell", inline: $script
    node1.vm.hostname = "node-1.example.com"
    node1.vm.network "forwarded_port", guest: 80, host: 18080
    node1.vm.network "forwarded_port", guest: 22, host: 12222
    node1.vm.provider :virtualbox do |v|
      v.name = "node1.demo1"
      v.memory = 2048
      v.cpus = 2
    end
    node1.vm.network :private_network, ip: "10.30.0.10", auto_config: true
    node1.vm.network :private_network, ip: "192.168.0.10", auto_config: true
  end
  config.vm.define :node2 do |node2|
    node2.vm.box = "rhel-7.1"
    node2.vm.provision "shell", inline: $script
    node2.vm.hostname = "node-2.example.com"
    node2.vm.provider :virtualbox do |v|
      v.name = "node2.demo1"
      v.memory = 1024
      v.cpus = 1
    end
    node2.vm.network :private_network, ip: "10.30.0.11", auto_config: true
    node2.vm.network :private_network, ip: "192.168.0.11", auto_config: true
  end
  config.vm.define :node3 do |node3|
    node3.vm.box = "rhel-7.1"
    node3.vm.provision "shell", inline: $script
    node3.vm.hostname = "node-3.example.com"
    node3.vm.provider :virtualbox do |v|
      v.name = "node3.demo1"
      v.memory = 1024
      v.cpus = 1
    end
    node3.vm.network :private_network, ip: "10.30.0.12", auto_config: true
    node3.vm.network :private_network, ip: "192.168.0.12", auto_config: true
  end
  config.vm.define :network1 do |network1|
    network1.vm.box = "rhel-7.1"
    network1.vm.provision "shell", inline: $script
    network1.vm.hostname = "network1.example.com"
    network1.vm.provider :virtualbox do |v|
      v.name = "network1.demo1"
      v.memory = 2048
      v.cpus = 1
    end
    network1.vm.network :private_network, ip: "10.30.0.10", auto_config: true
    network1.vm.network :private_network, ip: "172.16.10.10", auto_config: true
  end
  config.vm.define :compute1 do |compute1|
    compute1.vm.box = "rhel-7.1"
    compute1.vm.provision "shell", inline: $script
    compute1.vm.hostname = "compute1.example.com"
    compute1.vm.provider :virtualbox do |v|
      v.name = "compute1.demo1"
      v.memory = 1024
      v.cpus = 1
    end
    compute1.vm.network :private_network, ip: "10.30.0.11", auto_config: true
    compute1.vm.network :private_network, ip: "172.16.10.11", auto_config: true
  end
end
#===========================================================
#Neutron SDN Manual Installation
#===========================================================

#>>>>>>>>>>>>>>>>>>>> Install the network node (network1)

#Vagrant
vagrant up network1
vagrant up compute1

#Configure ifcfg-enp0s8
vi /etc/sysconfig/network-scripts/ifcfg-enp0s8

NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.30.0.10
NETMASK=255.255.255.0
DEVICE=enp0s8
PEERDNS=no

#Configure ifcfg-enp0s9
vi /etc/sysconfig/network-scripts/ifcfg-enp0s9

NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.10.10
NETMASK=255.255.255.0
DEVICE=enp0s9
PEERDNS=no

#Restart the network service
systemctl restart network

#Install the required packages
yum install libvirt openvswitch python-virtinst xauth tigervnc qemu-kvm-rhev -y

#Start libvirt and openvswitch

systemctl enable libvirtd openvswitch
systemctl restart libvirtd openvswitch
firewall-cmd --set-default-zone=trusted
firewall-cmd --reload

#Remove the default libvirt network to keep the network layout easy to analyze
virsh net-destroy default
virsh net-autostart --disable default
virsh net-undefine default

#Enable IP forwarding
vi /etc/sysctl.conf

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

#Apply immediately
sysctl -p

#Create a linux bridge
brctl addbr qbr01
ip link set qbr01 up

#Create an instance attached to the qbr01 bridge; the network-related parts are configured as follows
mkdir /data/gre -p
vi /data/gre/instance1.xml

<domain type="qemu">
<uuid>9b43dbac-450c-45e0-8755-bfd485183212</uuid>
<name>instance1</name>
<memory>524288</memory>
<vcpu>1</vcpu>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">Red Hat Inc.</entry>
<entry name="product">OpenStack Nova</entry>
<entry name="version">2014.1.1-3.el6</entry>
<entry name="serial">6b348fa3-ccf4-420d-b4f2-2a894357d637</entry>
<entry name="uuid">9b43dbac-450c-45e0-8755-bfd485183212</entry>
</system>
</sysinfo>
<os>
<type>hvm</type>
<boot dev="hd"/>
<smbios mode="sysinfo"/>
</os>
<features>
<acpi/>
<apic/>
</features>
<clock offset="utc"/>
<cpu mode="host-model" match="exact"/>
<devices>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2" cache="none"/>
<source file="/data/gre/instance1.img"/>
<target bus="virtio" dev="vda"/>
</disk>
<interface type='bridge'>
<source bridge='qbr01'/>
<target dev='tap01'/>
<model type='virtio'/>
<driver name='qemu'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type="file">
<source path="/data/gre/instance1.log"/>
</serial>
<serial type="pty"/>
<input type="tablet" bus="usb"/>
<graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
<video>
<model type="cirrus"/>
</video>
</devices>
</domain>

cd /data/gre/
wget http://192.168.1.100/content/images/vdisk/cirros-0.3.1-x86_64-disk.img
mv cirros-0.3.1-x86_64-disk.img instance1.img
virsh define instance1.xml
virsh start instance1
virsh vncdisplay instance1
vncviewer :0

#After the console comes up, log in and add the IP address 192.168.1.10
ip addr add 192.168.1.10/24 dev eth0
route add default gw 192.168.1.1

#Create an internal bridge br-int to emulate the OpenStack integration bridge
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre options:remote_ip=172.16.10.11

#Create a veth pair connecting the Linux bridge 'qbr01' and the Open vSwitch bridge 'br-int'
ip link add qvo01 type veth peer name qvb01
brctl addif qbr01 qvb01
ovs-vsctl add-port br-int qvo01
ovs-vsctl set port qvo01 tag=100
ip link set qvb01 up
ip link set qvo01 up
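The device names used here follow a convention (an assumption borrowed from Nova's own naming, stated for orientation): `qbrNN` is the Linux bridge, `qvbNN` the veth end on the bridge side, `qvoNN` the veth end on the OVS side, and `tapNN` the instance interface. A trivial sketch deriving the set from a port-id suffix:

```shell
#!/bin/sh
# Hypothetical naming helper: given a port-id suffix, list the device names
# used in the manual setup above.
port=01
names="qbr${port} qvb${port} qvo${port} tap${port}"
echo "$names"   # qbr01 qvb01 qvo01 tap01
```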

#Inspect br-int on network1
ovs-vsctl show

#>>>>>>>>>>>>>>>>>>>> Install the compute node (compute1)

#Network interface configuration
vi /etc/sysconfig/network-scripts/ifcfg-enp0s8

BOOTPROTO=none
ONBOOT=yes
IPADDR=10.30.0.11
NETMASK=255.255.255.0
DEVICE=enp0s8
PEERDNS=no

#Network interface configuration
vi /etc/sysconfig/network-scripts/ifcfg-enp0s9

NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.10.11
NETMASK=255.255.255.0
DEVICE=enp0s9
PEERDNS=no

#Restart the network service
service network restart

#Install the required packages
yum install -y libvirt openvswitch python-virtinst xauth tigervnc qemu-kvm-rhev

#Remove the default libvirt network
virsh net-destroy default
virsh net-autostart --disable default
virsh net-undefine default

#Enable IP forwarding
vi /etc/sysctl.conf

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

#Apply immediately
sysctl -p

#Start libvirt and openvswitch
systemctl enable libvirtd openvswitch
systemctl restart libvirtd openvswitch
firewall-cmd --set-default-zone=trusted
firewall-cmd --reload

#Create a linux bridge
brctl addbr qbr02
ip link set qbr02 up

#Create a VM attached to qbr02
mkdir -p /data/gre
vi /data/gre/instance2.xml

<domain type="qemu">
<uuid>9b43dbac-450c-45e0-8755-bfd485183212</uuid>
<name>instance2</name>
<memory>524288</memory>
<vcpu>1</vcpu>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">Red Hat Inc.</entry>
<entry name="product">OpenStack Nova</entry>
<entry name="version">2015.1.1-3.el7ost</entry>
<entry name="serial">6b348fa3-ccf4-420d-b4f2-2a894357d637</entry>
<entry name="uuid">9b43dbac-450c-45e0-8755-bfd485183212</entry>
</system>
</sysinfo>
<os>
<type>hvm</type>
<boot dev="hd"/>
<smbios mode="sysinfo"/>
</os>
<features>
<acpi/>
<apic/>
</features>
<clock offset="utc"/>
<cpu mode="host-model" match="exact"/>
<devices>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2" cache="none"/>
<source file="/data/gre/instance2.img"/>
<target bus="virtio" dev="vda"/>
</disk>
<interface type='bridge'>
<source bridge='qbr02'/>
<target dev='tap02'/>
<model type='virtio'/>
<driver name='qemu'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type="file">
<source path="/data/gre/instance2.log"/>
</serial>
<serial type="pty"/>
<input type="tablet" bus="usb"/>
<graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
<video>
<model type="cirrus"/>
</video>
</devices>
</domain>

cd /data/gre/
wget http://192.168.1.100/content/images/vdisk/cirros-0.3.1-x86_64-disk.img
mv cirros-0.3.1-x86_64-disk.img instance2.img
virsh define instance2.xml
virsh start instance2
virsh vncdisplay instance2
vncviewer :0

#After the console comes up, log in and add the IP address 192.168.1.11
ip addr add 192.168.1.11/24 dev eth0
route add default gw 192.168.1.1

#Create an internal bridge br-int to emulate the OpenStack integration bridge
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre options:remote_ip=172.16.10.10

#Create a veth pair connecting the Linux bridge 'qbr02' and the Open vSwitch bridge 'br-int'
ip link add qvo02 type veth peer name qvb02
brctl addif qbr02 qvb02
ovs-vsctl add-port br-int qvo02
ovs-vsctl set port qvo02 tag=100
ip link set qvb02 up
ip link set qvo02 up

#Inspect br-int on compute1
ovs-vsctl show

#Check connectivity to instance1 from the instance2 console
ping 192.168.1.10

#Use a network namespace to provide connectivity within the tenant private network
#Add a namespace, dhcp01, to isolate the tenant network
ip netns add dhcp01

#Create a DHCP service in the dhcp01 namespace for the private network 192.168.1.0/24
ovs-vsctl add-port br-int tapdhcp01 -- set interface tapdhcp01 type=internal
ovs-vsctl set port tapdhcp01 tag=100
ip link set tapdhcp01 netns dhcp01
ip netns exec dhcp01 ip addr add 192.168.1.2/24 dev tapdhcp01
ip netns exec dhcp01 ip link set tapdhcp01 up

#Verify connectivity from the namespace to instance1 and instance2
ip netns exec dhcp01 ping 192.168.1.10
ip netns exec dhcp01 ping 192.168.1.11

#Implement an L3 router with a network namespace and iptables
ovs-vsctl add-br br-ex

#Configure enp0s8
vi /etc/sysconfig/network-scripts/ifcfg-enp0s8

DEVICE=enp0s8
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
NAME=enp0s8
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
TYPE=OVSPort
PEERDNS=no

#Configure br-ex
vi /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex
STP=no
BRIDGING_OPTS=priority=32768
TYPE=OVSBridge
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br-ex
ONBOOT=yes
IPADDR=10.30.0.10
PREFIX=24
DEVICETYPE=ovs
USERCTL=yes

#Restart the network service
ovs-vsctl add-port br-ex enp0s8 && service network restart

#Verify connectivity after the configuration
ping 10.30.0.10

#Add a namespace, router01, for routing and floating IP allocation
ip netns add router01

#Add a port on br-int to act as the gateway for the private network 192.168.1.0/24

ovs-vsctl add-port br-int qr01 -- set interface qr01 type=internal
ovs-vsctl set port qr01 tag=100
ip link set qr01 netns router01
ip netns exec router01 ip addr add 192.168.1.1/24 dev qr01
ip netns exec router01 ip link set qr01 up
ip netns exec router01 ip link set lo up

#Add a port on br-ex to serve as the next hop for the private network 192.168.1.0/24
ovs-vsctl add-port br-ex qg01 -- set interface qg01 type=internal
ip link set qg01 netns router01
ip netns exec router01 ip addr add 10.30.0.100/24 dev qg01
ip netns exec router01 ip link set qg01 up
ip netns exec router01 ip link set lo up

#Simulate floating IP access to instance1
#Assign floating IP 10.30.0.101 to instance1 (192.168.1.10)
ip netns exec router01 ip addr add 10.30.0.101/32 dev qg01
ip netns exec router01 iptables -t nat -A OUTPUT -d 10.30.0.101/32 -j DNAT --to-destination 192.168.1.10
ip netns exec router01 iptables -t nat -A PREROUTING -d 10.30.0.101/32 -j DNAT --to-destination 192.168.1.10
ip netns exec router01 iptables -t nat -A POSTROUTING -s 192.168.1.10/32 -j SNAT --to-source 10.30.0.101
ip netns exec router01 iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j SNAT --to-source 10.30.0.100

#Test the floating IP
ping 10.30.0.101

#Flush the NAT chains if needed
iptables -t nat -F
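The three NAT rules per floating IP follow a fixed pattern, so they can be generated by a small helper. A dry-run sketch — the function name and dry-run style are mine; the rule pattern matches the commands above:

```shell
# Emit the commands that bind a floating IP to a fixed IP in a router namespace
floating_ip_cmds() {
  local ns=$1 float=$2 fixed=$3 dev=$4
  echo "ip netns exec $ns ip addr add $float/32 dev $dev"
  echo "ip netns exec $ns iptables -t nat -A PREROUTING -d $float/32 -j DNAT --to-destination $fixed"
  echo "ip netns exec $ns iptables -t nat -A POSTROUTING -s $fixed/32 -j SNAT --to-source $float"
}
floating_ip_cmds router01 10.30.0.101 192.168.1.10 qg01
```

Pipe the output to `sh` once reviewed, or run the lines by hand as above.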

OpenStack PackStack

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.

$script = <<SCRIPT
echo doing provision...
if [ "$LANG" != "en_US.UTF-8" ]; then
  echo "export LC_ALL=en_US.UTF-8" >> /root/.bashrc
fi
echo -e "10.30.0.10\tnode-1.example.com\tnode-1\n10.30.0.11\tnode-2.example.com\tnode-2\n10.30.0.12\tnode-3.example.com\tnode-3" >> /etc/hosts
echo -e "192.168.0.10\tnode-1.example.com\tnode-1\n192.168.0.11\tnode-2.example.com\tnode-2\n192.168.0.12\tnode-3.example.com\tnode-3" >> /etc/hosts
rm -f /etc/yum.repos.d/*

yum install -y wget
wget -O /etc/yum.repos.d/rh-openstack-7-el7.repo http://192.168.1.100/content/cfgfile/rh-openstack-7-el7.repo
wget -O /etc/yum.repos.d/rh-rhel-7-el7.repo http://192.168.1.100/content/cfgfile/rh-rhel-7-el7.repo

echo done...
SCRIPT

Vagrant.configure(2) do |config|
  config.vm.define :node1 do |node1|
    node1.vm.box = "rhel-7.1"
    node1.vm.provision "shell", inline: $script
    node1.vm.hostname = "node-1.example.com"
    node1.vm.network "forwarded_port", guest: 80, host: 18080
    node1.vm.network "forwarded_port", guest: 22, host: 12222
    node1.vm.provider :virtualbox do |v|
      v.name = "node1.demo1"
      v.memory = 2048
      v.cpus = 2
    end
    node1.vm.network :private_network, ip: "10.30.0.10", auto_config: true
    node1.vm.network :private_network, ip: "192.168.0.10", auto_config: true
  end
  config.vm.define :node2 do |node2|
    node2.vm.box = "rhel-7.1"
    node2.vm.provision "shell", inline: $script
    node2.vm.hostname = "node-2.example.com"
    node2.vm.provider :virtualbox do |v|
      v.name = "node2.demo1"
      v.memory = 1024
      v.cpus = 1
    end
    node2.vm.network :private_network, ip: "10.30.0.11", auto_config: true
    node2.vm.network :private_network, ip: "192.168.0.11", auto_config: true
  end
  config.vm.define :node3 do |node3|
    node3.vm.box = "rhel-7.1"
    node3.vm.provision "shell", inline: $script
    node3.vm.hostname = "node-3.example.com"
    node3.vm.provider :virtualbox do |v|
      v.name = "node3.demo1"
      v.memory = 1024
      v.cpus = 1
    end
    node3.vm.network :private_network, ip: "10.30.0.12", auto_config: true
    node3.vm.network :private_network, ip: "192.168.0.12", auto_config: true
  end
  config.vm.define :network1 do |network1|
    network1.vm.box = "rhel-7.1"
    network1.vm.provision "shell", inline: $script
    network1.vm.hostname = "network1.example.com"
    network1.vm.provider :virtualbox do |v|
      v.name = "network1.demo1"
      v.memory = 2048
      v.cpus = 1
    end
    network1.vm.network :private_network, ip: "10.30.0.10", auto_config: true
    network1.vm.network :private_network, ip: "172.16.10.10", auto_config: true
  end
  config.vm.define :compute1 do |compute1|
    compute1.vm.box = "rhel-7.1"
    compute1.vm.provision "shell", inline: $script
    compute1.vm.hostname = "compute1.example.com"
    compute1.vm.provider :virtualbox do |v|
      v.name = "compute1.demo1"
      v.memory = 1024
      v.cpus = 1
    end
    compute1.vm.network :private_network, ip: "10.30.0.11", auto_config: true
    compute1.vm.network :private_network, ip: "172.16.10.11", auto_config: true
  end
end
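The inline provisioner above appends to /etc/hosts on every provision run, so re-provisioning duplicates the entries. A hedged sketch of an idempotent variant (the function name is mine):

```shell
# Append a hosts entry only if it is not already present
add_host_entry() {
  local entry=$1 file=${2:-/etc/hosts}
  grep -qF "$entry" "$file" || printf '%s\n' "$entry" >> "$file"
}
# usage inside the provision script, e.g.:
# add_host_entry "$(printf '10.30.0.10\tnode-1.example.com\tnode-1')"
```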
#===========================================================
#OpenStack PackStack
#===========================================================

#Vagrant
vagrant up network1
vagrant up compute1

#Repository
cd /etc/yum.repos.d

wget http://192.168.1.100/content/repofiles/rh-ceph-1-el7.repo
wget http://192.168.1.100/content/repofiles/rh-rhel-7-el7.repo
wget http://192.168.1.100/content/repofiles/rh-rhelosp-7-el7.repo

yum clean all
yum repolist

yum install -y net-tools

#Configure packstack; if a run fails it can safely be re-executed
cd -
yum install openstack-packstack -y
packstack --answer-file=packstack-answers.txt
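Before running, it can help to review which options the answer file actually sets, skipping comments and empty values; a small sketch (the filename follows the command above):

```shell
# List the populated CONFIG_* settings in a packstack answer file
grep -E '^CONFIG_[A-Z0-9_]+=.+' packstack-answers.txt
```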

#packstack-answers.txt
[general]

# Path to a public key to install on servers. If a usable key has not
# been installed on the remote servers, the user is prompted for a
# password and this key is installed so the password will not be
# required again.
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

# Default password to be used everywhere (overridden by passwords set
# for individual services or users).
CONFIG_DEFAULT_PASSWORD=

# Specify 'y' to install MariaDB.
CONFIG_MARIADB_INSTALL=y

# Specify 'y' to install OpenStack Image Service (glance).
CONFIG_GLANCE_INSTALL=y

# Specify 'y' to install OpenStack Block Storage (cinder).
CONFIG_CINDER_INSTALL=y

# Specify 'y' to install OpenStack Shared File System (manila).
CONFIG_MANILA_INSTALL=n

# Specify 'y' to install OpenStack Compute (nova).
CONFIG_NOVA_INSTALL=y

# Specify 'y' to install OpenStack Networking (neutron); otherwise,
# Compute Networking (nova) will be used.
CONFIG_NEUTRON_INSTALL=y

# Specify 'y' to install OpenStack Dashboard (horizon).
CONFIG_HORIZON_INSTALL=y

# Specify 'y' to install OpenStack Object Storage (swift).
CONFIG_SWIFT_INSTALL=n

# Specify 'y' to install OpenStack Metering (ceilometer).
CONFIG_CEILOMETER_INSTALL=n

# Specify 'y' to install OpenStack Orchestration (heat).
CONFIG_HEAT_INSTALL=n

# Specify 'y' to install OpenStack Data Processing (sahara).
CONFIG_SAHARA_INSTALL=n

# Specify 'y' to install OpenStack Database (trove).
CONFIG_TROVE_INSTALL=n

# Specify 'y' to install OpenStack Bare Metal Provisioning (ironic).
CONFIG_IRONIC_INSTALL=n

# Specify 'y' to install the OpenStack Client packages (command-line
# tools). An admin "rc" file will also be installed.
CONFIG_CLIENT_INSTALL=y

# Comma-separated list of NTP servers. Leave plain if Packstack
# should not install ntpd on instances.
CONFIG_NTP_SERVERS=

# Specify 'y' to install Nagios to monitor OpenStack hosts. Nagios
# provides additional tools for monitoring the OpenStack environment.
CONFIG_NAGIOS_INSTALL=n

# Comma-separated list of servers to be excluded from the
# installation. This is helpful if you are running Packstack a second
# time with the same answer file and do not want Packstack to
# overwrite these server's configurations. Leave empty if you do not
# need to exclude any servers.
EXCLUDE_SERVERS=

# Specify 'y' if you want to run OpenStack services in debug mode;
# otherwise, specify 'n'.
CONFIG_DEBUG_MODE=n

# IP address of the server on which to install OpenStack services
# specific to the controller role (for example, API servers or
# dashboard).
CONFIG_CONTROLLER_HOST=10.30.0.10

# List of IP addresses of the servers on which to install the Compute
# service.
CONFIG_COMPUTE_HOSTS=10.30.0.11

# List of IP addresses of the server on which to install the network
# service such as Compute networking (nova network) or OpenStack
# Networking (neutron).
CONFIG_NETWORK_HOSTS=10.30.0.10

# Specify 'y' if you want to use VMware vCenter as hypervisor and
# storage; otherwise, specify 'n'.
CONFIG_VMWARE_BACKEND=n

# Specify 'y' if you want to use unsupported parameters. This should
# be used only if you know what you are doing. Issues caused by using
# unsupported options will not be fixed before the next major release.
CONFIG_UNSUPPORTED=n

# Specify 'y' if you want to use subnet addresses (in CIDR format)
# instead of interface names in following options:
# CONFIG_NOVA_COMPUTE_PRIVIF, CONFIG_NOVA_NETWORK_PRIVIF,
# CONFIG_NOVA_NETWORK_PUBIF, CONFIG_NEUTRON_OVS_BRIDGE_IFACES,
# CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS, CONFIG_NEUTRON_OVS_TUNNEL_IF.
# This is useful for cases when interface names are not same on all
# installation hosts.
CONFIG_USE_SUBNETS=n

# IP address of the VMware vCenter server.
CONFIG_VCENTER_HOST=

# User name for VMware vCenter server authentication.
CONFIG_VCENTER_USER=

# Password for VMware vCenter server authentication.
CONFIG_VCENTER_PASSWORD=

# Name of the VMware vCenter cluster.
CONFIG_VCENTER_CLUSTER_NAME=

# (Unsupported!) IP address of the server on which to install
# OpenStack services specific to storage servers such as Image or
# Block Storage services.
CONFIG_STORAGE_HOST=10.30.0.10

# (Unsupported!) IP address of the server on which to install
# OpenStack services specific to OpenStack Data Processing (sahara).
CONFIG_SAHARA_HOST=10.30.0.10

# Specify 'y' to enable the EPEL repository (Extra Packages for
# Enterprise Linux).
CONFIG_USE_EPEL=n

# Comma-separated list of URLs for any additional yum repositories,
# to use for installation.
CONFIG_REPO=

# Specify 'y' to enable the RDO testing repository.
CONFIG_ENABLE_RDO_TESTING=n

# To subscribe each server with Red Hat Subscription Manager, include
# this with CONFIG_RH_PW.
CONFIG_RH_USER=

# To subscribe each server to receive updates from a Satellite
# server, provide the URL of the Satellite server. You must also
# provide a user name (CONFIG_SATELLITE_USERNAME) and password
# (CONFIG_SATELLITE_PASSWORD) or an access key (CONFIG_SATELLITE_AKEY)
# for authentication.
CONFIG_SATELLITE_URL=

# To subscribe each server with Red Hat Subscription Manager, include
# this with CONFIG_RH_USER.
CONFIG_RH_PW=

# Specify 'y' to enable RHEL optional repositories.
CONFIG_RH_OPTIONAL=y

# HTTP proxy to use with Red Hat Subscription Manager.
CONFIG_RH_PROXY=

# Port to use for Red Hat Subscription Manager's HTTP proxy.
CONFIG_RH_PROXY_PORT=

# User name to use for Red Hat Subscription Manager's HTTP proxy.
CONFIG_RH_PROXY_USER=

# Password to use for Red Hat Subscription Manager's HTTP proxy.
CONFIG_RH_PROXY_PW=

# User name to authenticate with the RHN Satellite server; if you
# intend to use an access key for Satellite authentication, leave this
# blank.
CONFIG_SATELLITE_USER=

# Password to authenticate with the RHN Satellite server; if you
# intend to use an access key for Satellite authentication, leave this
# blank.
CONFIG_SATELLITE_PW=

# Access key for the Satellite server; if you intend to use a user
# name and password for Satellite authentication, leave this blank.
CONFIG_SATELLITE_AKEY=

# Certificate path or URL of the certificate authority to verify that
# the connection with the Satellite server is secure. If you are not
# using Satellite in your deployment, leave this blank.
CONFIG_SATELLITE_CACERT=

# Profile name that should be used as an identifier for the system in
# RHN Satellite (if required).
CONFIG_SATELLITE_PROFILE=

# Comma-separated list of flags passed to the rhnreg_ks command
# (novirtinfo, norhnsd, nopackages).
CONFIG_SATELLITE_FLAGS=

# HTTP proxy to use when connecting to the RHN Satellite server (if
# required).
CONFIG_SATELLITE_PROXY=

# User name to authenticate with the Satellite-server HTTP proxy.
CONFIG_SATELLITE_PROXY_USER=

# User password to authenticate with the Satellite-server HTTP proxy.
CONFIG_SATELLITE_PROXY_PW=

# Specify filepath for CA cert file. If CONFIG_SSL_CACERT_SELFSIGN is
# set to 'n' it has to be preexisting file.
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt

# Specify filepath for CA cert key file. If
# CONFIG_SSL_CACERT_SELFSIGN is set to 'n' it has to be preexisting
# file.
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key

# Enter the path to use to store generated SSL certificates in.
CONFIG_SSL_CERT_DIR=~/packstackca/

# Specify 'y' if you want Packstack to pregenerate the CA
# Certificate.
CONFIG_SSL_CACERT_SELFSIGN=y

# Enter the selfsigned CAcert subject country.
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--

# Enter the selfsigned CAcert subject state.
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State

# Enter the selfsigned CAcert subject location.
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City

# Enter the selfsigned CAcert subject organization.
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack

# Enter the selfsigned CAcert subject organizational unit.
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack

# Enter the selfsigned CAcert subject common name.
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=node-1.example.com

CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@node-1.example.com

# Service to be used as the AMQP broker (qpid, rabbitmq).
CONFIG_AMQP_BACKEND=rabbitmq

# IP address of the server on which to install the AMQP service.
CONFIG_AMQP_HOST=10.30.0.10

# Specify 'y' to enable SSL for the AMQP service.
CONFIG_AMQP_ENABLE_SSL=n

# Specify 'y' to enable authentication for the AMQP service.
CONFIG_AMQP_ENABLE_AUTH=n

# Password for the NSS certificate database of the AMQP service.
CONFIG_AMQP_NSS_CERTDB_PW=redhat

# User for AMQP authentication.
CONFIG_AMQP_AUTH_USER=amqp_user

# Password for AMQP authentication.
CONFIG_AMQP_AUTH_PASSWORD=redhat

# IP address of the server on which to install MariaDB. If a MariaDB
# installation was not specified in CONFIG_MARIADB_INSTALL, specify
# the IP address of an existing database server (a MariaDB cluster can
# also be specified).
CONFIG_MARIADB_HOST=10.30.0.10

# User name for the MariaDB administrative user.
CONFIG_MARIADB_USER=root

# Password for the MariaDB administrative user.
CONFIG_MARIADB_PW=redhat

# Password to use for the Identity service (keystone) to access the
# database.
CONFIG_KEYSTONE_DB_PW=redhat

# Default region name to use when creating tenants in the Identity
# service.
CONFIG_KEYSTONE_REGION=RegionOne

# Token to use for the Identity service API.
CONFIG_KEYSTONE_ADMIN_TOKEN=redhat

# Email address for the Identity service 'admin' user. Defaults to:
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost

# User name for the Identity service 'admin' user. Defaults to:
# 'admin'.
CONFIG_KEYSTONE_ADMIN_USERNAME=admin

# Password to use for the Identity service 'admin' user.
CONFIG_KEYSTONE_ADMIN_PW=redhat

# Password to use for the Identity service 'demo' user.
CONFIG_KEYSTONE_DEMO_PW=redhat

# Identity service API version string (v2.0, v3).
CONFIG_KEYSTONE_API_VERSION=v2.0

# Identity service token format (UUID or PKI). The recommended format
# for new deployments is UUID.
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID

# Name of service to use to run the Identity service (keystone,
# httpd).
CONFIG_KEYSTONE_SERVICE_NAME=httpd

# Type of Identity service backend (sql, ldap).
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql

# URL for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_URL=ldap://10.30.0.10

# User DN for the Identity service LDAP backend. Used to bind to the
# LDAP server if the LDAP server does not allow anonymous
# authentication.
CONFIG_KEYSTONE_LDAP_USER_DN=

# User DN password for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=

# Base suffix for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_SUFFIX=

# Query scope for the Identity service LDAP backend. Use 'one' for
# onelevel/singleLevel or 'sub' for subtree/wholeSubtree ('base' is
# not actually used by the Identity service and is therefore
# deprecated) (base, one, sub)
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one

# Query page size for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1

# User subtree for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=

# User query filter for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_USER_FILTER=

# User object class for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=

# User ID attribute for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=

# User name attribute for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=

# User email address attribute for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=

# User-enabled attribute for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=

# Bit mask integer applied to user-enabled attribute for the Identity
# service LDAP backend. Indicate the bit that the enabled value is
# stored in if the LDAP server represents "enabled" as a bit on an
# integer rather than a boolean. A value of "0" indicates the mask is
# not used (default). If this is not set to "0", the typical value is
# "2", typically used when
# "CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE = userAccountControl".
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1

# Value of enabled attribute which indicates user is enabled for the
# Identity service LDAP backend. This should match an appropriate
# integer value if the LDAP server uses non-boolean (bitmask) values
# to indicate whether a user is enabled or disabled. If this is not
# set as 'y', the typical value is "512". This is typically used when
# "CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE = userAccountControl".
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE

# Specify 'y' if users are disabled (not enabled) in the Identity
# service LDAP backend (inverts boolean-enabled values). Some LDAP
# servers use a boolean lock attribute where "y" means an account is
# disabled. Setting this to 'y' allows these lock attributes to be
# used. This setting will have no effect if
# "CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK" is in use (n, y).
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n

# Comma-separated list of attributes stripped from LDAP user entry
# upon update.
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=

# Identity service LDAP attribute mapped to default_project_id for
# users.
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=

# Specify 'y' if you want to be able to create Identity service users
# through the Identity service interface; specify 'n' if you will
# create directly in the LDAP backend (n, y).
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n

# Specify 'y' if you want to be able to update Identity service users
# through the Identity service interface; specify 'n' if you will
# update directly in the LDAP backend (n, y).
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n

# Specify 'y' if you want to be able to delete Identity service users
# through the Identity service interface; specify 'n' if you will
# delete directly in the LDAP backend (n, y).
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n

# Identity service LDAP attribute mapped to password.
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=

# DN of the group entry to hold enabled LDAP users when using enabled
# emulation.
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=

# List of additional LDAP attributes for mapping additional attribute
# mappings for users. The attribute-mapping format is
# <ldap_attr>:<user_attr>, where ldap_attr is the attribute in the
# LDAP entry and user_attr is the Identity API attribute.
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=

# Group subtree for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=

# Group query filter for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=

# Group object class for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=

# Group ID attribute for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=

# Group name attribute for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=

# Group member attribute for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=

# Group description attribute for the Identity service LDAP backend.
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=

# Comma-separated list of attributes stripped from LDAP group entry
# upon update.
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=

# Specify 'y' if you want to be able to create Identity service
# groups through the Identity service interface; specify 'n' if you
# will create directly in the LDAP backend (n, y).
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n

# Specify 'y' if you want to be able to update Identity service
# groups through the Identity service interface; specify 'n' if you
# will update directly in the LDAP backend (n, y).
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n

# Specify 'y' if you want to be able to delete Identity service
# groups through the Identity service interface; specify 'n' if you
# will delete directly in the LDAP backend (n, y).
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n

# List of additional LDAP attributes used for mapping additional
# attribute mappings for groups. The attribute=mapping format is
# <ldap_attr>:<group_attr>, where ldap_attr is the attribute in the
# LDAP entry and group_attr is the Identity API attribute.
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=

# Specify 'y' if the Identity service LDAP backend should use TLS (n,
# y).
CONFIG_KEYSTONE_LDAP_USE_TLS=n

# CA certificate directory for Identity service LDAP backend (if TLS
# is used).
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=

# CA certificate file for Identity service LDAP backend (if TLS is
# used).
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=

# Certificate-checking strictness level for Identity service LDAP
# backend (never, allow, demand).
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand

# Password to use for the Image service (glance) to access the
# database.
CONFIG_GLANCE_DB_PW=redhat

# Password to use for the Image service to authenticate with the
# Identity service.
CONFIG_GLANCE_KS_PW=redhat

# Storage backend for the Image service (controls how the Image
# service stores disk images). Valid options are: file or swift
# (Object Storage). The Object Storage service must be enabled to use
# it as a working backend; otherwise, Packstack falls back to 'file'.
# ['file', 'swift']
CONFIG_GLANCE_BACKEND=file

# Password to use for the Block Storage service (cinder) to access
# the database.
CONFIG_CINDER_DB_PW=redhat

# Password to use for the Block Storage service to authenticate with
# the Identity service.
CONFIG_CINDER_KS_PW=redhat

# Storage backend to use for the Block Storage service; valid options
# are: lvm, gluster, nfs, vmdk, netapp. ['lvm', 'gluster', 'nfs',
# 'vmdk', 'netapp']
CONFIG_CINDER_BACKEND=lvm

# Specify 'y' to create the Block Storage volumes group. That is,
# Packstack creates a raw disk image in /var/lib/cinder, and mounts it
# using a loopback device. This should only be used for testing on a
# proof-of-concept installation of the Block Storage service (a file-
# backed volume group is not suitable for production usage) (y, n).
CONFIG_CINDER_VOLUMES_CREATE=y

# Size of Block Storage volumes group. Actual volume size will be
# extended with 3% more space for VG metadata. Remember that the size
# of the volume group will restrict the amount of disk space that you
# can expose to Compute instances, and that the specified amount must
# be available on the device used for /var/lib/cinder.
CONFIG_CINDER_VOLUMES_SIZE=10G

# A single or comma-separated list of Red Hat Storage (gluster)
# volume shares to mount. Example: 'ip-address:/vol-name', 'domain
# :/vol-name'
CONFIG_CINDER_GLUSTER_MOUNTS=

# A single or comma-separated list of NFS exports to mount. Example:
# 'ip-address:/export-name'
CONFIG_CINDER_NFS_MOUNTS=

# Administrative user account name used to access the NetApp storage
# system or proxy server.
CONFIG_CINDER_NETAPP_LOGIN=

# Password for the NetApp administrative user account specified in
# the CONFIG_CINDER_NETAPP_LOGIN parameter.
CONFIG_CINDER_NETAPP_PASSWORD=

# Hostname (or IP address) for the NetApp storage system or proxy
# server.
CONFIG_CINDER_NETAPP_HOSTNAME=

# The TCP port to use for communication with the storage system or
# proxy. If not specified, Data ONTAP drivers will use 80 for HTTP and
# 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
# Defaults to: 80.
CONFIG_CINDER_NETAPP_SERVER_PORT=80

# Storage family type used on the NetApp storage system; valid
# options are ontap_7mode for using Data ONTAP operating in 7-Mode,
# ontap_cluster for using clustered Data ONTAP, or E-Series for NetApp
# E-Series. Defaults to: ontap_cluster. ['ontap_7mode',
# 'ontap_cluster', 'eseries']
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster

# The transport protocol used when communicating with the NetApp
# storage system or proxy server. Valid values are http or https.
# Defaults to: 'http' ('http', 'https').
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http

# Storage protocol to be used on the data path with the NetApp
# storage system; valid options are iscsi, fc, nfs. Defaults to: nfs
# (iscsi, fc, nfs).
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs

# Quantity to be multiplied by the requested volume size to ensure
# enough space is available on the virtual storage server (Vserver) to
# fulfill the volume creation request. Defaults to: 1.0.
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0

# Time period (in minutes) that is allowed to elapse after the image
# is last accessed, before it is deleted from the NFS image cache.
# When a cache-cleaning cycle begins, images in the cache that have
# not been accessed in the last M minutes, where M is the value of
# this parameter, are deleted from the cache to create free space on
# the NFS share. Defaults to: 720.
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720

# If the percentage of available space for an NFS share has dropped
# below the value specified by this parameter, the NFS image cache is
# cleaned. Defaults to: 20.
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20

# When the percentage of available space on an NFS share has reached
# the percentage specified by this parameter, the driver stops
# clearing files from the NFS image cache that have not been accessed
# in the last M minutes, where M is the value of the
# CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES parameter. Defaults to:
# 60.
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60

# Single or comma-separated list of NetApp NFS shares for Block
# Storage to use. Format: ip-address:/export-name. Defaults to: ''.
CONFIG_CINDER_NETAPP_NFS_SHARES=

# File with the list of available NFS shares. Defaults to:
# '/etc/cinder/shares.conf'.
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf

# This parameter is only utilized when the storage protocol is
# configured to use iSCSI or FC. This parameter is used to restrict
# provisioning to the specified controller volumes. Specify the value
# of this parameter to be a comma separated list of NetApp controller
# volume names to be used for provisioning. Defaults to: ''.
CONFIG_CINDER_NETAPP_VOLUME_LIST=

# The vFiler unit on which provisioning of block storage volumes will
# be done. This parameter is only used by the driver when connecting
# to an instance with a storage family of Data ONTAP operating in
# 7-Mode. Only use this parameter when utilizing the MultiStore feature
# on the NetApp storage system. Defaults to: ''.
CONFIG_CINDER_NETAPP_VFILER=

# The name of the config.conf stanza for a Data ONTAP (7-mode) HA
# partner. This option is only used by the driver when connecting to
# an instance with a storage family of Data ONTAP operating in 7-Mode,
# and it is required if the storage protocol selected is FC. Defaults
# to: ''.
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=

# This option specifies the virtual storage server (Vserver) name on
# the storage cluster on which provisioning of block storage volumes
# should occur. Defaults to: ''.
CONFIG_CINDER_NETAPP_VSERVER=

# Restricts provisioning to the specified controllers. Value must be
# a comma-separated list of controller hostnames or IP addresses to be
# used for provisioning. This option is only utilized when the storage
# family is configured to use E-Series. Defaults to: ''.
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=

# Password for the NetApp E-Series storage array. Defaults to: ''.
CONFIG_CINDER_NETAPP_SA_PASSWORD=

# This option is used to define how the controllers in the E-Series
# storage array will work with the particular operating system on the
# hosts that are connected to it. Defaults to: 'linux_dm_mp'
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp

# Path to the NetApp E-Series proxy application on a proxy server.
# The value is combined with the value of the
# CONFIG_CINDER_NETAPP_TRANSPORT_TYPE, CONFIG_CINDER_NETAPP_HOSTNAME,
# and CONFIG_CINDER_NETAPP_SERVER_PORT options to create the URL used by
# the driver to connect to the proxy application. Defaults to:
# '/devmgr/v2'.
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2

# Restricts provisioning to the specified storage pools. Only dynamic
# disk pools are currently supported. The value must be a comma-
# separated list of disk pool names to be used for provisioning.
# Defaults to: ''.
CONFIG_CINDER_NETAPP_STORAGE_POOLS=

# Password to use for the OpenStack File Share service (manila) to
# access the database.
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER

# Password to use for the OpenStack File Share service (manila) to
# authenticate with the Identity service.
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER

# Backend for the OpenStack File Share service (manila); valid
# options are: generic or netapp (generic, netapp).
CONFIG_MANILA_BACKEND=generic

# Denotes whether the driver should handle the responsibility of
# managing share servers. This must be set to false if the driver is
# to operate without managing share servers. Defaults to: 'false'
# (true, false).
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false

# The transport protocol used when communicating with the storage
# system or proxy server. Valid values are 'http' and 'https'.
# Defaults to: 'https' (https, http).
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https

# Administrative user account name used to access the NetApp storage
# system. Defaults to: ''.
CONFIG_MANILA_NETAPP_LOGIN=admin

# Password for the NetApp administrative user account specified in
# the CONFIG_MANILA_NETAPP_LOGIN parameter. Defaults to: ''.
CONFIG_MANILA_NETAPP_PASSWORD=

# Hostname (or IP address) for the NetApp storage system or proxy
# server. Defaults to: ''.
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=

# The storage family type used on the storage system; valid values
# are ontap_cluster for clustered Data ONTAP. Defaults to:
# 'ontap_cluster'.
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster

# The TCP port to use for communication with the storage system or
# proxy server. If not specified, Data ONTAP drivers will use 80 for
# HTTP and 443 for HTTPS. Defaults to: '443'.
CONFIG_MANILA_NETAPP_SERVER_PORT=443

# Pattern for searching available aggregates for NetApp provisioning.
# Defaults to: '(.*)'.
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)

# Name of aggregate on which to create the NetApp root volume. This
# option only applies when the option
# CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS is set to True.
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=

# NetApp root volume name. Defaults to: 'root'.
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root

# This option specifies the storage virtual machine (previously
# called a Vserver) name on the storage cluster on which provisioning
# of shared file systems should occur. This option only applies when
# the option driver_handles_share_servers is set to False. Defaults
# to: ''.
CONFIG_MANILA_NETAPP_VSERVER=

# Denotes whether the driver should handle the responsibility of
# managing share servers. This must be set to false if the driver is
# to operate without managing share servers. Defaults to: 'true'.
# ['true', 'false']
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true

# Volume name template for Manila service. Defaults to: 'manila-
# share-%s'.
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s

# Share mount path for Manila service. Defaults to: '/shares'.
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares

# Location of disk image for Manila service instance. Defaults to:
# 'https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2'.
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2

# User in Manila service instance.
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu

# Password to service instance user.
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu

# Type of networking that the backend will use. A more detailed
# description of each option is available in the Manila docs. Defaults
# to: 'neutron'. ['neutron', 'nova-network', 'standalone']
CONFIG_MANILA_NETWORK_TYPE=neutron

# Gateway IPv4 address that should be used. Required. Defaults to:
# ''.
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=

# Network mask that will be used. Can be either decimal like '24' or
# binary like '255.255.255.0'. Required. Defaults to: ''.
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=

# Set it if network has segmentation (VLAN, VXLAN, etc). It will be
# assigned to share-network and share drivers will be able to use this
# for network interfaces within provisioned share servers. Optional.
# Example: 1001. Defaults to: ''.
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=

# Can be IP address, range of IP addresses or list of addresses or
# ranges. Contains addresses from IP network that are allowed to be
# used. If empty, then will be assumed that all host addresses from
# network can be used. Optional. Examples: 10.0.0.10 or
# 10.0.0.10-10.0.0.20 or
# 10.0.0.10-10.0.0.20,10.0.0.30-10.0.0.40,10.0.0.50. Defaults to: ''.
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=

# IP version of network. Optional. Defaults to: 4 (4, 6).
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4

# Password to use for OpenStack Bare Metal Provisioning (ironic) to
# access the database.
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER

# Password to use for OpenStack Bare Metal Provisioning to
# authenticate with the Identity service.
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER

# Password to use for the Compute service (nova) to access the
# database.
CONFIG_NOVA_DB_PW=redhat

# Password to use for the Compute service to authenticate with the
# Identity service.
CONFIG_NOVA_KS_PW=redhat

# Overcommitment ratio for virtual to physical CPUs. Specify 1.0 to
# disable CPU overcommitment.
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

# Overcommitment ratio for virtual to physical RAM. Specify 1.0 to
# disable RAM overcommitment.
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

# Protocol used for instance migration. Valid options are: tcp and
# ssh. Note that by default, the Compute user is created with the
# /sbin/nologin shell so that the SSH protocol will not work. To make
# the SSH protocol work, you must configure the Compute user on
# compute hosts manually (tcp, ssh).
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp

# Manager that runs the Compute service.
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager

# PEM encoded certificate to be used for ssl on the https server,
# leave blank if one should be generated, this certificate should not
# require a passphrase. If CONFIG_HORIZON_SSL is set to 'n' this
# parameter is ignored.
CONFIG_VNC_SSL_CERT=

# SSL keyfile corresponding to the certificate if one was entered. If
# CONFIG_HORIZON_SSL is set to 'n' this parameter is ignored.
CONFIG_VNC_SSL_KEY=

# Private interface for flat DHCP on the Compute servers.
CONFIG_NOVA_COMPUTE_PRIVIF=eth0

# Compute Network Manager. ['^nova\.network\.manager\.\w+Manager$']
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

# Public interface on the Compute network server.
CONFIG_NOVA_NETWORK_PUBIF=eth0

# Private interface for flat DHCP on the Compute network server.
CONFIG_NOVA_NETWORK_PRIVIF=eth1

# IP Range for flat DHCP. ['^[\:\.\da-fA-f]+(\/\d+){0,1}$']
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.1.0/32

# IP Range for floating IP addresses. ['^[\:\.\da-
# fA-f]+(\/\d+){0,1}$']
CONFIG_NOVA_NETWORK_FLOATRANGE=10.30.0.0/24

# Specify 'y' to automatically assign a floating IP to new instances.
# (y, n)
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

# First VLAN for private networks (Compute networking).
CONFIG_NOVA_NETWORK_VLAN_START=100

# Number of networks to support (Compute networking).
CONFIG_NOVA_NETWORK_NUMBER=1

# Number of addresses in each private subnet (Compute networking).
CONFIG_NOVA_NETWORK_SIZE=255

# Password to use for OpenStack Networking (neutron) to authenticate
# with the Identity service.
CONFIG_NEUTRON_KS_PW=redhat

# The password to use for OpenStack Networking to access the
# database.
CONFIG_NEUTRON_DB_PW=redhat

# The name of the Open vSwitch bridge (or empty for linuxbridge) for
# the OpenStack Networking L3 agent to use for external traffic.
# Specify 'provider' if you intend to use a provider network to handle
# external traffic.
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

# Password for the OpenStack Networking metadata agent.
CONFIG_NEUTRON_METADATA_PW=redhat

# Specify 'y' to install OpenStack Networking's Load-Balancing-
# as-a-Service (LBaaS) (y, n).
CONFIG_LBAAS_INSTALL=n

# Specify 'y' to install OpenStack Networking's L3 Metering agent (y,
# n).
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n

# Specify 'y' to configure OpenStack Networking's Firewall-
# as-a-Service (FWaaS) (y, n)
CONFIG_NEUTRON_FWAAS=n

# Comma-separated list of network-type driver entry points to be
# loaded from the neutron.ml2.type_drivers namespace (local, flat,
# vlan, gre, vxlan).
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan

# Comma-separated, ordered list of network types to allocate as
# tenant networks. The 'local' value is only useful for single-box
# testing and provides no connectivity between hosts (local, vlan,
# gre, vxlan).
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan

# Comma-separated ordered list of networking mechanism driver entry
# points to be loaded from the neutron.ml2.mechanism_drivers namespace
# (logger, test, linuxbridge, openvswitch, hyperv, ncs, arista,
# cisco_nexus, mlnx, l2population).
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch

# Comma-separated list of physical_network names with which flat
# networks can be created. Use * to allow flat networks with arbitrary
# physical_network names.
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*

# Comma-separated list of <physical_network>:<vlan_min>:<vlan_max> or
# <physical_network> specifying physical_network names usable for VLAN
# provider and tenant networks, as well as ranges of VLAN tags on each
# available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=

# Comma-separated list of <tun_min>:<tun_max> tuples enumerating
# ranges of GRE tunnel IDs that are available for tenant-network
# allocation. A tuple must be an array with tun_max +1 - tun_min >
# 1000000.
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=

# Comma-separated list of addresses for VXLAN multicast group. If
# left empty, disables VXLAN from sending allocate broadcast traffic
# (disables multicast VXLAN mode). Should be a Multicast IP (v4 or v6)
# address.
CONFIG_NEUTRON_ML2_VXLAN_GROUP=

# Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network
# allocation. Minimum value is 0 and maximum value is 16777215.
CONFIG_NEUTRON_ML2_VNI_RANGES=10:100

# Name of the L2 agent to be used with OpenStack Networking
# (linuxbridge, openvswitch).
CONFIG_NEUTRON_L2_AGENT=openvswitch

# Comma-separated list of interface mappings for the OpenStack
# Networking linuxbridge plugin. Each tuple in the list must be in the
# format <physical_network>:<net_interface>. Example:
# physnet1:eth1,physnet2:eth2,physnet3:eth3.
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

# Comma-separated list of bridge mappings for the OpenStack
# Networking Open vSwitch plugin. Each tuple in the list must be in
# the format <physical_network>:<ovs_bridge>. Example: physnet1:br-
# eth1,physnet2:br-eth2,physnet3:br-eth3
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=

# Comma-separated list of colon-separated Open vSwitch
# <bridge>:<interface> pairs. The interface will be added to the
# associated bridge. If you desire the bridge to be persistent a value
# must be added to this directive, also
# CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS must be set in order to create
# the proper port. This can be achieved from the command line by
# issuing the following command: packstack --allinone --os-neutron-
# ovs-bridge-mappings=ext-net:br-ex --os-neutron-ovs-bridge-interfaces
# =br-ex:eth0
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

# Interface for the Open vSwitch tunnel. Packstack overrides the IP
# address used for tunnels on this hypervisor to the IP found on the
# specified interface (for example, eth1).
CONFIG_NEUTRON_OVS_TUNNEL_IF=

# VXLAN UDP port.
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

# Specify 'y' to set up Horizon communication over https (y, n).
CONFIG_HORIZON_SSL=n

# Secret key to use for Horizon Secret Encryption Key.
CONFIG_HORIZON_SECRET_KEY=redhat

# PEM-encoded certificate to be used for SSL connections on the https
# server (the certificate should not require a passphrase). To
# generate a certificate, leave blank.
CONFIG_HORIZON_SSL_CERT=

# SSL keyfile corresponding to the certificate if one was specified.
CONFIG_HORIZON_SSL_KEY=

CONFIG_HORIZON_SSL_CACERT=

# Password to use for the Object Storage service to authenticate with
# the Identity service.
CONFIG_SWIFT_KS_PW=480874f915fd49a2

# Comma-separated list of devices to use as storage device for Object
# Storage. Each entry must take the format /path/to/dev (for example,
# specifying /dev/vdb installs /dev/vdb as the Object Storage storage
# device; Packstack does not create the filesystem, you must do this
# first). If left empty, Packstack creates a loopback device for test
# setup.
CONFIG_SWIFT_STORAGES=

# Number of Object Storage storage zones; this number MUST be no
# larger than the number of configured storage devices.
CONFIG_SWIFT_STORAGE_ZONES=1

# Number of Object Storage storage replicas; this number MUST be no
# larger than the number of configured storage zones.
CONFIG_SWIFT_STORAGE_REPLICAS=1

# File system type for storage nodes (xfs, ext4).
CONFIG_SWIFT_STORAGE_FSTYPE=ext4

# Custom seed number to use for swift_hash_path_suffix in
# /etc/swift/swift.conf. If you do not provide a value, a seed number
# is automatically generated.
CONFIG_SWIFT_HASH=61da8c03b2034020

# Size of the Object Storage loopback file storage device.
CONFIG_SWIFT_STORAGE_SIZE=2G

# Password used by Orchestration service user to authenticate against
# the database.
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER

# Encryption key to use for authentication in the Orchestration
# database (16, 24, or 32 chars).
CONFIG_HEAT_AUTH_ENC_KEY=0aa50ccab8254d99

# Password to use for the Orchestration service to authenticate with
# the Identity service.
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER

# Specify 'y' to install the Orchestration CloudWatch API (y, n).
CONFIG_HEAT_CLOUDWATCH_INSTALL=n

# Specify 'y' to install the Orchestration CloudFormation API (y, n).
CONFIG_HEAT_CFN_INSTALL=n

# Name of the Identity domain for Orchestration.
CONFIG_HEAT_DOMAIN=heat

# Name of the Identity domain administrative user for Orchestration.
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin

# Password for the Identity domain administrative user for
# Orchestration.
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER

# Specify 'y' to provision for demo usage and testing (y, n).
CONFIG_PROVISION_DEMO=y

# Specify 'y' to configure the OpenStack Integration Test Suite
# (tempest) for testing. The test suite requires OpenStack Networking
# to be installed (y, n).
CONFIG_PROVISION_TEMPEST=n

# CIDR network address for the floating IP subnet.
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

# The name to be assigned to the demo image in Glance (default
# "cirros").
CONFIG_PROVISION_IMAGE_NAME=cirros

# A URL or local file location for an image to download and provision
# in Glance (defaults to a URL for a recent "cirros" image).
#CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_URL=http://192.168.1.100/content/images/vdisk/cirros-0.3.4-x86_64-disk.img

# Format for the demo image (default "qcow2").
CONFIG_PROVISION_IMAGE_FORMAT=qcow2

# User to use when connecting to instances booted from the demo
# image.
CONFIG_PROVISION_IMAGE_SSH_USER=cirros

# Name of the Integration Test Suite provisioning user. If you do not
# provide a user name, Tempest is configured in a standalone mode.
CONFIG_PROVISION_TEMPEST_USER=

# Password to use for the Integration Test Suite provisioning user.
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER

# CIDR network address for the floating IP subnet.
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28

# URI of the Integration Test Suite git repository.
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

# Revision (branch) of the Integration Test Suite git repository.
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

# Specify 'y' to configure the Open vSwitch external bridge for an
# all-in-one deployment (the L3 external bridge acts as the gateway
# for virtual machines) (y, n).
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n

# Password to use for OpenStack Data Processing (sahara) to access
# the database.
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER

# Password to use for OpenStack Data Processing to authenticate with
# the Identity service.
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER

# Secret key for signing Telemetry service (ceilometer) messages.
CONFIG_CEILOMETER_SECRET=ee2d644e43ba41db

# Password to use for Telemetry to authenticate with the Identity
# service.
CONFIG_CEILOMETER_KS_PW=85642d5d273d4dab

# Backend driver for Telemetry's group membership coordination
# (redis, none).
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis

# IP address of the server on which to install MongoDB.
CONFIG_MONGODB_HOST=10.30.0.10

# IP address of the server on which to install the Redis master
# server.
CONFIG_REDIS_MASTER_HOST=10.30.0.10

# Port on which the Redis server(s) listens.
CONFIG_REDIS_PORT=6379

# Specify 'y' to have Redis try to use HA (y, n).
CONFIG_REDIS_HA=n

# Hosts on which to install Redis slaves.
CONFIG_REDIS_SLAVE_HOSTS=

# Hosts on which to install Redis sentinel servers.
CONFIG_REDIS_SENTINEL_HOSTS=

# Host to configure as the Redis coordination sentinel.
CONFIG_REDIS_SENTINEL_CONTACT_HOST=

# Port on which Redis sentinel servers listen.
CONFIG_REDIS_SENTINEL_PORT=26379

# Quorum value for Redis sentinel servers.
CONFIG_REDIS_SENTINEL_QUORUM=2

# Name of the master server watched by the Redis sentinel (eg.
# master).
CONFIG_REDIS_MASTER_NAME=mymaster

# Password to use for OpenStack Database-as-a-Service (trove) to
# access the database.
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER

# Password to use for OpenStack Database-as-a-Service to authenticate
# with the Identity service.
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER

# User name to use when OpenStack Database-as-a-Service connects to
# the Compute service.
CONFIG_TROVE_NOVA_USER=trove

# Tenant to use when OpenStack Database-as-a-Service connects to the
# Compute service.
CONFIG_TROVE_NOVA_TENANT=services

# Password to use when OpenStack Database-as-a-Service connects to
# the Compute service.
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER

# Password of the nagiosadmin user on the Nagios server.
CONFIG_NAGIOS_PW=7ba4e6253b3742c1
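The answers file above sets CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0 and CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5; these ratios directly determine how much capacity the Nova scheduler will hand out. A quick worked sketch for a hypothetical 8-core / 32 GiB compute node (the host size is made up for illustration):

```shell
# Schedulable capacity under the overcommit ratios from the answers file.
# The 8-core / 32 GiB host is hypothetical; substitute your node's specs.
awk -v cores=8 -v cpu_ratio=16.0 -v ram_gib=32 -v ram_ratio=1.5 'BEGIN {
  printf "schedulable vCPUs: %d\n",     cores * cpu_ratio   # 8 * 16.0 = 128
  printf "schedulable RAM:   %d GiB\n", ram_gib * ram_ratio # 32 * 1.5 = 48
}'
```

Setting either ratio to 1.0 disables the corresponding overcommitment, which is the safer choice for latency-sensitive workloads.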

OpenStack Testing

#===========================================================
#Preinstallation
#===========================================================

# Use the Douban PyPI mirror (faster inside China)
mkdir /root/.pip
cat > /root/.pip/pip.conf <<EOF
[global]
index-url = http://pypi.douban.com/simple/
[install]
trusted-host = pypi.douban.com
EOF

easy_install pip
# pbr needs to be >= 1.6
pip list | grep pbr
pip uninstall pbr
pip install pbr
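Tempest's dependencies require pbr >= 1.6, so it is worth failing fast if the reinstall did not pick up a new enough version. A small sketch, assuming only that `pip` is on PATH and that GNU `sort -V` is available (it is on RHEL 7); the `version_ge` helper name is my own:

```shell
# Check that the installed pbr satisfies the >= 1.6 requirement.
# version_ge returns success when $1 >= $2 under sort -V version ordering.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

pbr_ver=$(pip show pbr 2>/dev/null | awk '/^Version:/{print $2}')
if version_ge "${pbr_ver:-0}" 1.6; then
  echo "pbr ${pbr_ver} is new enough"
else
  echo "pbr '${pbr_ver:-none}' too old; run: pip uninstall pbr && pip install pbr" >&2
fi
```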

#===========================================================
#Tempest Installation
#===========================================================

yum install git gcc libxslt-devel openssl-devel libffi-devel python-devel python-pip python-virtualenv -y
# git clone https://github.com/openstack/tempest.git
# pip install tempest/

cd /root/pip-1.5.5
python setup.py install

pip install tempest


[root@node-1 ~(admin)]# glance image-list
+--------------------------------------+--------------+
| ID | Name |
+--------------------------------------+--------------+
| 6bbafd22-6791-4a9d-8240-355a64f8a5c1 | cirros-in-fs |
+--------------------------------------+--------------+

[root@node-1 ~(admin)]# neutron net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+-------------------------------------------------------+
| 9f2a7cc6-6495-4105-a797-6f99d943f406 | private | c4ebe773-1685-445b-b4f3-1c71004da4de 192.168.100.0/24 |
| 987d51e4-28f9-4861-8020-fc455916f441 | public | cbb688ab-61ce-4b44-92fc-7b3d0ffee5e7 10.30.0.0/24 |
+--------------------------------------+---------+-------------------------------------------------------+

[root@node-1 ~(admin)]# openstack user list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 3a90eefdd4af4f64b2b55e5787c8b480 | nova |
| 81fda8223f424eacb358b9ecf30d8514 | demo |
| afc0c39b498044a6b012ca85e1a82e3e | cinder |
| c66ab63dac7142438383f880ceae77e3 | neutron |
| de0c3f0303394b3e8d127119721e7f6b | admin |
| e6a433779b8749d689e4f8d47026028c | glance |
+----------------------------------+---------+
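The UUIDs from the listings above have to be copied into tempest.conf by hand, which is error-prone. A small helper (the `table_id` name is my own) that pulls the ID column for a named row out of any OpenStack CLI ASCII table:

```shell
# Extract the ID column for a named row from an OpenStack CLI ASCII table.
# Usage: glance image-list | table_id cirros-in-fs
table_id() {
  # Surround the name with spaces so "public" does not match "public2".
  awk -v n=" $1 " 'index($0, n) { print $2 }'
}

# Demo on a captured line of the image-list output shown above:
printf '| 6bbafd22-6791-4a9d-8240-355a64f8a5c1 | cirros-in-fs |\n' \
  | table_id cirros-in-fs
# → 6bbafd22-6791-4a9d-8240-355a64f8a5c1
```

In the lab you would use `glance image-list | table_id cirros-in-fs` for image_ref and `neutron net-list | table_id public` for public_network_id.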


# Edit the tempest configuration
vi /etc/tempest.conf

[DEFAULT]
log_file = tempest.log
[alarming]
[auth]
tempest_roles = _member_
create_isolated_networks = false
admin_username = admin
admin_tenant_name = admin
admin_password = redhat
[baremetal]
driver_enabled = false
[compute]
image_ref = 6bbafd22-6791-4a9d-8240-355a64f8a5c1
image_ref_alt = 6bbafd22-6791-4a9d-8240-355a64f8a5c1
flavor_ref = 1
flavor_ref_alt = 2
fixed_network_name = private
region = RegionOne
shelved_offload_time = 0
min_compute_nodes = 2
[compute-feature-enabled]
console_output = false
shelve = false
rescue = false
personality = false
preserve_ports = true
attach_encrypted_volume = false
[dashboard]
dashboard_url = http://10.30.0.10
login_url = http://10.30.0.10/dashboard/auth/login
[data-processing]
[data-processing-feature-enabled]
[database]
[debug]
[identity]
catalog_type = identity
uri = http://10.30.0.10:5000/v2.0/
auth_version = v2
region = RegionOne
v2_admin_endpoint_type = adminURL
v2_public_endpoint_type = publicURL
username = admin
tenant_name = admin
admin_role = admin
password = redhat
[identity-feature-enabled]
api_v3 = false
[image]
catalog_type = image
region = RegionOne
endpoint_type = publicURL
[image-feature-enabled]
api_v2 = true
api_v1 = false
[input-scenario]
[messaging]
[negative]
[network]
catalog_type = network
region = RegionOne
endpoint_type = publicURL
tenant_network_cidr = 192.168.100.0/24
tenant_network_mask_bits = 28
tenant_networks_reachable = false
public_network_id = 987d51e4-28f9-4861-8020-fc455916f441
floating_network_name = public
dns_servers = 8.8.8.8,8.8.4.4
default_network = 192.168.100.0/24
[network-feature-enabled]
[object-storage]
[object-storage-feature-enabled]
[orchestration]
[oslo_concurrency]
disable_process_locking = false
lock_path = /var/lib/nova/tmp
[scenario]
img_dir = /var/lib/glance/images/cirros-0.3.1-x86_64-uec
img_file = cirros-0.3.1-x86_64-disk.img
img_disk_format = qcow2
img_container_format = bare
[service_available]
cinder = false
neutron = true
glance = true
swift = false
nova = true
heat = false
ceilometer = false
aodh = false
horizon = true
sahara = false
ironic = false
trove = false
zaqar = false
[stress]
[telemetry]
[telemetry-feature-enabled]
[validation]
security_group = true
security_group_rules = true
connect_method = floating
auth_method = keypair
image_ssh_user = cirros
image_ssh_password = "cubswin:)"
floating_ip_range = 10.30.0.0/24
network_for_ssh = public
[volume]
build_interval = 3
catalog_type = volume
region = RegionOne
endpoint_type = publicURL
backend1_name = lvm
backend2_name = ceph
backend_names = lvm,ceph
[volume-feature-enabled]
multi_backend = true
backup = true
api_v1 = true
api_v2 = true

# Verification
verify-tempest-config
./run_tempest.sh
nosetests tempest/api/identity/admin/v2/test_services.py --with-xunit --xunit-file=/root/keystone_test_services.xml
nosetests tempest.api.compute.flavors.test_flavors:FlavorsTestJSON
nosetests tempest.api.compute.flavors.test_flavors:FlavorsTestJSON.test_list_flavors
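The file written by `--xunit-file` is plain XML, so the pass/fail counters can be pulled out with a one-line grep. A sketch (the `summarize_xunit` helper name is my own; the real file path follows the command above):

```shell
# Summarize an xunit result file by printing its counter attributes.
summarize_xunit() {
  grep -o '\(tests\|errors\|failures\|skip\)="[0-9]*"' "$1"
}

# Demo on a minimal xunit header; the real file is written by nosetests:
printf '<testsuite name="demo" tests="3" errors="0" failures="1" skip="0">\n' \
  > /tmp/demo-xunit.xml
summarize_xunit /tmp/demo-xunit.xml
```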

#===========================================================
#Rally Installation
#===========================================================

git clone https://github.com/stackforge/rally.git && cd rally
./install_rally.sh -v
rally-manage db recreate
source admin-openrc.sh
rally deployment create --fromenv --name=existing
rally deployment check
cp samples/tasks/scenarios/keystone/create-and-delete-user.json .
cat create-and-delete-user.json
rally task start create-and-delete-user.json
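For reference, a Rally task file of this kind is a small JSON document that maps a scenario name to its arguments and runner settings. A minimal sketch of the shape (the actual sample shipped with Rally may set different args and counts):

```json
{
  "KeystoneBasic.create_delete_user": [
    {
      "args": {},
      "runner": {
        "type": "constant",
        "times": 10,
        "concurrency": 2
      }
    }
  ]
}
```

The `constant` runner executes the scenario `times` iterations with up to `concurrency` of them in flight at once; swapping the scenario name is all it takes to benchmark a different service.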

# Edit the report template to load AngularJS from the local mirror
vi /usr/lib/python2.7/site-packages/rally/ui/templates/task/report.html

<script type="text/javascript" src="http://192.168.1.100/content/ajax/libs/angularjs/1.3.3/angular.1.3.3.min.js"></script>

rally task report --out=report1.html --open
cp samples/tasks/scenarios/glance/create-and-delete-image.json .
rally task start create-and-delete-image.json
cp samples/tasks/scenarios/nova/boot-and-delete.json .
rally task start boot-and-delete.json
cp samples/tasks/scenarios/cinder/create-volume.json .
rally task start create-volume.json
cp samples/tasks/scenarios/vm/boot-runcommand-delete.json .
rally task start boot-runcommand-delete.json
rally verify start --set identity

OpenStack Development

  1. Horizon development prerequisites
  2. A simple example
  3. Configuration and permissions
  4. Giving Horizon a new theme
  5. Customizing Horizon's interactions
  6. Adding a new module to Horizon
  7. Examples from the existing Horizon module library
  8. Horizon components in detail
  9. Python and JavaScript libraries

Horizon customization

  1. Python
  2. Django
  3. jQuery
  4. OpenStack API
  5. OpenStack组件开发