This article analyzes the neutron source code of the Mitaka release.
Neutron's plugin mechanism is very flexible, and it uses the Stevedore library to load the various plugins dynamically. To understand how Stevedore works and how to use it, see the blog post "A Look at two Python Plugin Managers: Stevedore and Pike", or the official documentation (linked from that post).
/etc/neutron/neutron.conf:

```ini
[DEFAULT]
core_plugin = ml2
```
```python
neutron.manager.py:

......
CORE_PLUGINS_NAMESPACE = 'neutron.core_plugins'
......

class NeutronManager(object):
    ......
    def __init__(self, options=None, config_file=None):
        # If no options have been provided, create an empty dict
        if not options:
            options = {}

        msg = validate_pre_plugin_load()  ### core_plugin is a mandatory option
        if msg:
            LOG.critical(msg)
            raise Exception(msg)
        ......
        plugin_provider = cfg.CONF.core_plugin
        LOG.info(_LI("Loading core plugin: %s"), plugin_provider)
        ### the ml2 plugin is loaded via stevedore.driver.DriverManager
        self.plugin = self._get_plugin_instance(CORE_PLUGINS_NAMESPACE,
                                                plugin_provider)
```
```python
neutron.common.utils.py:

def load_class_by_alias_or_classname(namespace, name):
    """Load class using stevedore alias or the class name  ### note this

    :param namespace: namespace where the alias is defined
    :param name: alias or class name of the class to be loaded
    :returns class if calls can be loaded
    :raises ImportError if class cannot be loaded
    """
    if not name:
        LOG.error(_LE("Alias or class name is not set"))
        raise ImportError(_("Class not found."))
    try:
        # Try to resolve class by alias
        ### the namespace is pre-defined in setup.cfg
        mgr = driver.DriverManager(namespace, name)
        class_to_load = mgr.driver
    except RuntimeError:
        e1_info = sys.exc_info()
        # Fallback to class name
        ### if no alias is defined, fall back to importing the class directly
        try:
            class_to_load = importutils.import_class(name)
        except (ImportError, ValueError):
            LOG.error(_LE("Error loading class by alias"),
                      exc_info=e1_info)
            LOG.error(_LE("Error loading class by class name"),
                      exc_info=True)
            raise ImportError(_("Class not found."))
    return class_to_load
```
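The alias-then-classname fallback above can be sketched with the standard library alone; here `importlib` stands in for stevedore and `load_class`/`REGISTRY` are hypothetical names for illustration, not neutron code:

```python
import importlib


# A fake alias registry, analogous to the [entry_points] section in setup.cfg
REGISTRY = {'demo.plugins': {'odict': 'collections.OrderedDict'}}


def load_class(aliases, namespace, name):
    """Resolve `name` first as an alias in `namespace`, then as a dotted
    class path, mirroring load_class_by_alias_or_classname."""
    if not name:
        raise ImportError("Class not found.")
    # Try the alias first (stevedore's DriverManager would do this lookup)
    target = aliases.get(namespace, {}).get(name, name)
    # Fall back to importing "module.path.ClassName" directly
    module_path, _, class_name = target.rpartition('.')
    try:
        return getattr(importlib.import_module(module_path), class_name)
    except (ImportError, ValueError, AttributeError):
        raise ImportError("Class not found.")
```

Loading by alias and loading by full class path both resolve to the same class, which is exactly why the `core_plugin` option accepts either form.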
```ini
[entry_points]
......
neutron.core_plugins =
    ml2 = neutron.plugins.ml2.plugin:Ml2Plugin  ### ml2 is currently the only core plugin implementation
```
ml2 is neutron's core plugin, and every built-in neutron API request is handled by it ("built-in" as opposed to extension APIs: mainly the APIs operating on the network, subnet, port and similar resources, whereas the APIs for resources such as lbaas, vpn and fwaas are implemented by the corresponding extensions), so it must be configured. ml2 also serves as a reference implementation that makes it easier to write and submit other extensions, and helps standardize how those extensions implement their functionality. For an analysis of part of the ml2 plugin code, see the earlier post analyzing neutron's security-group code.
```ini
[DEFAULT]
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron.services.qos.qos_plugin.QoSPlugin,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
```
```python
neutron.manager.NeutronManager:

class NeutronManager(object):
    ......
    def __init__(self, options=None, config_file=None):
        ......
        # core plugin as a part of plugin collection simplifies
        # checking extensions
        # TODO(enikanorov): make core plugin the same as
        # the rest of service plugins
        self.service_plugins = {constants.CORE: self.plugin}
        self._load_service_plugins()
```
```python
neutron.manager.NeutronManager#_load_service_plugins:

    def _load_service_plugins(self):
        """Loads service plugins.

        Starts from the core plugin and checks if it supports
        advanced services then loads classes provided in configuration.
        """
        # load services from the core plugin first
        self._load_services_from_core_plugin()  #### loads nothing

        plugin_providers = cfg.CONF.service_plugins
        ### extend the configured services with predefined defaults,
        ### see neutron/plugins/common/constants.py:42
        plugin_providers.extend(self._get_default_service_plugins())
        LOG.debug("Loading service plugins: %s", plugin_providers)
        for provider in plugin_providers:  ### load the plugins one by one
            if provider == '':
                continue

            LOG.info(_LI("Loading Plugin: %s"), provider)
            ### instantiate; loading works the same way as for the core plugin
            ### above, looking up the entry points first
            plugin_inst = self._get_plugin_instance('neutron.service_plugins',
                                                    provider)

            # only one implementation of svc_type allowed
            # specifying more than one plugin
            # for the same type is a fatal exception
            if plugin_inst.get_plugin_type() in self.service_plugins:
                raise ValueError(_("Multiple plugins for service "
                                   "%s were configured") %
                                 plugin_inst.get_plugin_type())

            self.service_plugins[plugin_inst.get_plugin_type()] = plugin_inst

            # search for possible agent notifiers declared in service plugin
            # (needed by agent management extension)
            ### update the agent notifiers, which push change notifications to
            ### agents; e.g. when a user creates a router, the L3 plugin uses a
            ### notifier to tell the L3 agent so the agent-side workflow runs
            if (hasattr(self.plugin, 'agent_notifiers') and
                    hasattr(plugin_inst, 'agent_notifiers')):
                self.plugin.agent_notifiers.update(plugin_inst.agent_notifiers)

            ### with debug logging enabled, every successfully loaded plugin is listed
            LOG.debug("Successfully loaded %(type)s plugin. "
                      "Description: %(desc)s",
                      {"type": plugin_inst.get_plugin_type(),
                       "desc": plugin_inst.get_plugin_description()})
```
```ini
[entry_points]
......
neutron.service_plugins =
    dummy = neutron.tests.unit.dummy_plugin:DummyServicePlugin
    router = neutron.services.l3_router.l3_router_plugin:L3RouterPlugin
    firewall = neutron_fwaas.services.firewall.fwaas_plugin:FirewallPlugin
    lbaas = neutron_lbaas.services.loadbalancer.plugin:LoadBalancerPlugin
    vpnaas = neutron_vpnaas.services.vpn.plugin:VPNDriverPlugin
    metering = neutron.services.metering.metering_plugin:MeteringPlugin
    neutron.services.firewall.fwaas_plugin.FirewallPlugin = neutron_fwaas.services.firewall.fwaas_plugin:FirewallPlugin
    neutron.services.loadbalancer.plugin.LoadBalancerPlugin = neutron_lbaas.services.loadbalancer.plugin:LoadBalancerPlugin
    neutron.services.vpn.plugin.VPNDriverPlugin = neutron_vpnaas.services.vpn.plugin:VPNDriverPlugin
    qos = neutron.services.qos.qos_plugin:QoSPlugin
    bgp = neutron.services.bgp.bgp_plugin:BgpPlugin
    tag = neutron.services.tag.tag_plugin:TagPlugin
    flavors = neutron.services.flavors.flavors_plugin:FlavorsPlugin
    auto_allocate = neutron.services.auto_allocate.plugin:Plugin
    network_ip_availability = neutron.services.network_ip_availability.plugin:NetworkIPAvailabilityPlugin
    timestamp_core = neutron.services.timestamp.timestamp_plugin:TimeStampPlugin
```
Service plugins provide the operation APIs for resources beyond what the core plugin handles. As described above, neutron has only a handful of core resources (network, subnet, port, etc.); the APIs for other resources such as qos, router, floating IPs, fwaas, lbaas and vpnaas are each backed by a corresponding plugin. Some older plugins live inside the neutron tree itself (under the services directory), while newer ones generally live in their own projects, such as neutron_lbaas, neutron_fwaas and neutron_vpnaas, which reduces coupling with the main neutron project and lets each of them evolve independently and quickly.
As for how each plugin gets registered with a WSGI controller and wired to the HTTP methods and URLs, that requires walking through the neutron-server startup flow. From a quick look at the code, the registration appears to happen in neutron.pecan_wsgi.startup.initialize_all: the first half of that method registers controllers for the core plugin's resources (network, port, etc.), and the second half registers controllers for the extension resources (router, floatingip, etc.). Overall, neutron.api.v2.base.Controller is the entry point that dispatches each HTTP request to the right plugin based on its method and URL.
Which service plugins to configure depends on the deployment. For example, a pure VLAN deployment does not need the router plugin, and if no firewall is needed, the firewall plugin need not be configured, and so on for the others. Leaving a plugin out simply means the deployment does not support the corresponding neutron extension API. The neutron API reference is at: https://developer.openstack.org/api-ref/network/index.html
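The one-plugin-per-service-type rule that _load_service_plugins enforces can be sketched as follows; `DummyPlugin` and `load_service_plugins` are simplified stand-ins for illustration, not neutron's classes:

```python
class DummyPlugin:
    """A stand-in exposing only the get_plugin_type() interface that
    NeutronManager relies on."""

    def __init__(self, plugin_type):
        self._type = plugin_type

    def get_plugin_type(self):
        return self._type


def load_service_plugins(plugin_instances):
    """Register plugins keyed by service type, rejecting duplicates the
    way NeutronManager._load_service_plugins does."""
    service_plugins = {}
    for plugin_inst in plugin_instances:
        if plugin_inst.get_plugin_type() in service_plugins:
            raise ValueError("Multiple plugins for service %s were configured"
                             % plugin_inst.get_plugin_type())
        service_plugins[plugin_inst.get_plugin_type()] = plugin_inst
    return service_plugins
```

Configuring two plugins that report the same type (say, two QoS implementations) is a fatal configuration error, not a silent override.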
[DEFAULT] notification-related options:

```ini
[DEFAULT]
dhcp_agent_notification = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
```
dhcp_agent_notification, notify_nova_on_port_data_changes
dhcp_agent_notification controls notifying the DHCP agent about network changes after an HTTP request has been handled by neutron-server, while notify_nova_on_port_data_changes controls notifying nova after port data changes.
```python
neutron.pecan_wsgi.hooks.notifier.NotifierHook#after:

    def after(self, state):
        # if the after hook is executed the request completed successfully and
        # therefore notifications must be sent
        resource_name = state.request.context.get('resource')
        collection_name = state.request.context.get('collection')
        neutron_context = state.request.context.get('neutron_context')
        if not resource_name:
            LOG.debug("Skipping NotifierHook processing as there was no "
                      "resource associated with the request")
            return
        action = pecan_constants.ACTION_MAP.get(state.request.method)
        if not action or action == 'get':
            LOG.debug("No notification will be sent for action: %s", action)
            return

        if action == 'delete':
            # The object has been deleted, so we must notify the agent with the
            # data of the original object
            data = {collection_name:
                    state.request.context.get('original_resources', [])}
        else:
            try:
                data = jsonutils.loads(state.response.body)
            except ValueError:
                if not state.response.body:
                    data = {}
        resources = []
        if data:
            if resource_name in data:
                resources = [data[resource_name]]
            elif collection_name in data:
                # This was a bulk request
                resources = data[collection_name]
        # Send a notification only if a resource can be identified in the
        # response. This means that for operations such as add_router_interface
        # no notification will be sent
        if cfg.CONF.dhcp_agent_notification and data:
            self._notify_dhcp_agent(
                neutron_context, resource_name, action, resources)

        if cfg.CONF.notify_nova_on_port_data_changes:
            orig = {}
            if action == 'update':
                orig = state.request.context.get('original_resources')[0]
            elif action == 'delete':
                # NOTE(kevinbenton): the nova notifier is a bit strange because
                # it expects the original to be in the last argument on a
                # delete rather than in the 'original_obj' position
                resources = (
                    state.request.context.get('original_resources') or [])
            for resource in resources:
                self._nova_notify(action, resource_name, orig,
                                  {resource_name: resource})

        event = '%s.%s.end' % (resource_name, action)
        if action == 'delete':
            if state.response.status_int > 300:
                # don't notify when unsuccessful
                # NOTE(kevinbenton): we may want to be more strict with the
                # response codes
                return
            resource_id = state.request.context.get('resource_id')
            payload = {resource_name + '_id': resource_id}
        elif action in ('create', 'update'):
            if not resources:
                # create/update did not complete so no notification
                return
            if len(resources) > 1:
                payload = {collection_name: resources}
            else:
                payload = {resource_name: resources[0]}
        else:
            return
        self._notifier.info(neutron_context, event, payload)
```
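The hook's method-to-action mapping and event naming can be condensed into a small sketch; the ACTION_MAP literal here is an assumption about what pecan_constants.ACTION_MAP contains, written for illustration:

```python
# Maps HTTP methods to notification actions, in the spirit of
# pecan_constants.ACTION_MAP (assumed contents)
ACTION_MAP = {'POST': 'create', 'PUT': 'update',
              'DELETE': 'delete', 'GET': 'get'}


def notification_event(method, resource_name):
    """Return the notification event topic for a request, or None when no
    notification should be emitted (unknown method, or a plain read)."""
    action = ACTION_MAP.get(method)
    if not action or action == 'get':
        return None
    return '%s.%s.end' % (resource_name, action)
```

So a successful `POST /v2.0/networks` ends up emitting a `network.create.end` notification, while GETs emit nothing.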
These two flows are quite similar in mechanism and purpose. The after method above is a hook implementation for the pecan WSGI framework; the hooks are configured when the pecan WSGI app is initialized. after is invoked once the request has been fully processed, just before the response is returned to the caller, while before is invoked after the WSGI layer receives a request but before it is dispatched to a controller. For an introduction to pecan's hook mechanism, see its documentation: https://pecan.readthedocs.io/en/latest/hooks.html
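The before/after wrapping that pecan performs can be approximated without the framework; the following is a minimal stand-in written for illustration, not pecan's actual implementation (pecan runs before hooks in ascending priority and after hooks in the reverse order):

```python
class Hook:
    """Base hook: subclasses override before/after, like pecan's PecanHook."""
    priority = 100

    def before(self, state):
        pass

    def after(self, state):
        pass


def dispatch(handler, state, hooks):
    """Run before hooks by ascending priority, the controller, then after
    hooks in reverse order, mimicking pecan's hook pipeline."""
    ordered = sorted(hooks, key=lambda h: h.priority)
    for hook in ordered:
        hook.before(state)
    state['response'] = handler(state)
    for hook in reversed(ordered):
        hook.after(state)
    return state['response']


class TraceHook(Hook):
    """Records when its before/after methods fire, for demonstration."""

    def __init__(self, name, priority, trace):
        self.name, self.priority, self.trace = name, priority, trace

    def before(self, state):
        self.trace.append('before:' + self.name)

    def after(self, state):
        self.trace.append('after:' + self.name)
```

With this model the priorities listed in setup_app explain the ordering: ContextHook (95) runs before NotifierHook (135) on the way in, and NotifierHook's after runs before ContextHook's on the way out.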
```python
neutron.pecan_wsgi.app.setup_app:

def setup_app(*args, **kwargs):
    config = {
        'server': {
            'port': CONF.bind_port,
            'host': CONF.bind_host
        },
        'app': {
            'root': 'neutron.pecan_wsgi.controllers.root.RootController',
            'modules': ['neutron.pecan_wsgi'],
        }
        # TODO(kevinbenton): error templates
    }
    pecan_config = pecan.configuration.conf_from_dict(config)

    app_hooks = [
        hooks.ExceptionTranslationHook(),  # priority 100
        hooks.ContextHook(),  # priority 95
        hooks.BodyValidationHook(),  # priority 120
        hooks.OwnershipValidationHook(),  # priority 125
        hooks.QuotaEnforcementHook(),  # priority 130
        hooks.NotifierHook(),  # priority 135  ##### this hook
        hooks.PolicyHook(),  # priority 140
    ]

    app = pecan.make_app(
        pecan_config.app.root,
        debug=False,
        wrap_app=_wrap_app,
        force_canonical=False,
        hooks=app_hooks,
        guess_content_type_from_ext=True
    )

    startup.initialize_all()

    return app
```
notify_nova_on_port_status_changes
This option toggles notify callbacks registered on change events of the port table in the database: whenever a row is inserted or updated in the port table, or its status column changes, a notification is sent to nova. In other words, creating a port, deleting a port or updating a port's status all trigger a notification.
```python
neutron.db.db_base_plugin_v2.NeutronDbPluginV2#__init__:

    def __init__(self):
        self.set_ipam_backend()
        if cfg.CONF.notify_nova_on_port_status_changes:
            # NOTE(arosen) These event listeners are here to hook into when
            # port status changes and notify nova about their change.
            self.nova_notifier = nova_notifier.Notifier()
            event.listen(models_v2.Port, 'after_insert',
                         self.nova_notifier.send_port_status)
            event.listen(models_v2.Port, 'after_update',
                         self.nova_notifier.send_port_status)
            event.listen(models_v2.Port.status, 'set',
                         self.nova_notifier.record_port_status_changed)
        for e in (events.BEFORE_CREATE, events.BEFORE_UPDATE,
                  events.BEFORE_DELETE):
            registry.subscribe(self.validate_network_rbac_policy_change,
                               rbac_mixin.RBAC_POLICY, e)
```
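The intent of the 'set' listener above, i.e. only treating a write as interesting when the status value actually changes, can be sketched like this (a simplified stand-in for illustration, not the real nova_notifier.Notifier):

```python
class PortStatusNotifier:
    """Collects (port_id, status) events while ignoring writes that do not
    change the status value, analogous in spirit to
    record_port_status_changed."""

    def __init__(self):
        self.pending = []

    def record_status_set(self, port_id, new_status, old_status):
        # SQLAlchemy's attribute 'set' event fires on every assignment;
        # only a real change is worth telling nova about.
        if new_status != old_status:
            self.pending.append((port_id, new_status))
```

A no-op write of ACTIVE over ACTIVE is filtered out, so nova is not spammed with notifications for updates that change other columns.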
Note that none of the notifications above reach nova as RPC messages over the MQ; they are HTTP requests sent through novaclient. That is, neutron never talks to nova over the MQ; the interaction stays a RESTful HTTP API. This feature was added fairly recently: in earlier releases only nova called the neutron API, so nova had no way of knowing when port information changed on the neutron side. A port could be deleted or have its status updated without nova noticing, leaving the two services inconsistent and causing many problems, hence this reverse notification flow. The commit that introduced it is: https://review.openstack.org/#/c/75253/.
neutron.conf has many more options, but the meaning and purpose of the rest are fairly self-evident, especially once they are grouped by section, so they are not analyzed further here. Next, the ml2 plugin options.
/etc/neutron/plugins/ml2/ml2_conf.ini
```ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan

[ml2_type_vlan]
network_vlan_ranges = physnet1:1:4094
```
Taking the simple VLAN mode as an example, the options above need to be configured together.
For tenant_network_types, see: http://www.aboutyun.com/thread-16476-1-1.html and also: https://www.bbsmax.com/A/xl563Kk0dr/. VLAN-mode networks are physical networks: they must be provisioned on the physical network devices by the network administrator in advance, and the networks in neutron must then be configured by the cloud administrator to match that physical configuration. If a configured VLAN range is not actually provisioned on the physical network, the corresponding neutron networks simply will not work. An ordinary tenant cannot choose a VLAN ID when creating a network; neutron picks one from the network_vlan_ranges option, whereas an administrator can create a neutron network with any VLAN ID.
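Each network_vlan_ranges entry has the form physical_network[:vlan_min:vlan_max]; a parsing sketch (`parse_network_vlan_ranges` is a hypothetical helper written for illustration, not neutron's own parser):

```python
def parse_network_vlan_ranges(entries):
    """Map each physical network to its list of (vlan_min, vlan_max)
    tenant-allocatable ranges.  An entry without a range means the
    physical network carries only admin-specified provider networks."""
    ranges = {}
    for entry in entries:
        parts = entry.split(':')
        physnet = parts[0]
        ranges.setdefault(physnet, [])
        if len(parts) == 3:
            vlan_min, vlan_max = int(parts[1]), int(parts[2])
            # 802.1Q VLAN IDs are 1..4094
            if not 1 <= vlan_min <= vlan_max <= 4094:
                raise ValueError("invalid VLAN range in %r" % entry)
            ranges[physnet].append((vlan_min, vlan_max))
    return ranges
```

With network_vlan_ranges = physnet1:3456:3457 as in the experiment below, the tenant-allocatable pool contains exactly two VLAN IDs.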
Verifying the effect of the network_vlan_ranges option:
Create a wp project and a wp user, add the wp user to the wp project with the __member__ role, and use the openstack role assignment list command to confirm that the wp user + wp project combination is a non-admin role. Then edit the neutron ml2 plugin configuration file /etc/neutron/plugins/ml2/ml2_conf.ini, setting network_vlan_ranges = physnet1:3456:3457 so that only two VLAN IDs are available, and finally restart the neutron-server service.
After loading the wp user / wp project credentials, run the following commands:
```
[root@vs-controller ~(wp)]$ neutron net-create wp-test-net
Created a new network:
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | True                                 |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| created_at              | 2017-12-09T02:45:30                  |
| description             |                                      |
| id                      | ec814501-d5b6-48a6-8722-3177cb4b0beb |
| ipv4_address_scope      |                                      |
| ipv6_address_scope      |                                      |
| mtu                     | 1500                                 |
| name                    | wp-test-net                          |
| port_security_enabled   | True                                 |
| qos_policy_id           |                                      |
| router:external         | False                                |
| shared                  | False                                |
| status                  | ACTIVE                               |
| subnets                 |                                      |
| tags                    |                                      |
| tenant_id               | 97d8756ea7fd4c8293072ebac6eb9e62     |
| updated_at              | 2017-12-09T02:45:30                  |
+-------------------------+--------------------------------------+
[root@vs-controller ~(wp)]$ neutron net-create wp-test-net2
Created a new network:
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | True                                 |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| created_at              | 2017-12-09T02:46:21                  |
| description             |                                      |
| id                      | 1e70f8dd-a828-4ddf-a5f0-e5b488d747b5 |
| ipv4_address_scope      |                                      |
| ipv6_address_scope      |                                      |
| mtu                     | 1500                                 |
| name                    | wp-test-net2                         |
| port_security_enabled   | True                                 |
| qos_policy_id           |                                      |
| router:external         | False                                |
| shared                  | False                                |
| status                  | ACTIVE                               |
| subnets                 |                                      |
| tags                    |                                      |
| tenant_id               | 97d8756ea7fd4c8293072ebac6eb9e62     |
| updated_at              | 2017-12-09T02:46:21                  |
+-------------------------+--------------------------------------+
[root@vs-controller ~(wp)]$ neutron net-create wp-test-net3
Unable to create the network. No tenant network is available for allocation.
Neutron server returns request_ids: ['req-c008e9e7-8141-44e2-92af-c962dc5eb5dd']

### create_network policy: every attribute from the second line down is admin_only
[root@vs-controller ~(wp)]$ grep create_network /etc/neutron/policy.json
    "create_network": "",
    "create_network:shared": "rule:admin_only",
    "create_network:router:external": "rule:admin_only",
    "create_network:is_default": "rule:admin_only",
    "create_network:segments": "rule:admin_only",
    "create_network:provider:network_type": "rule:admin_only",
    "create_network:provider:physical_network": "rule:admin_only",
    "create_network:provider:segmentation_id": "rule:admin_only",
    "create_network_profile": "rule:admin_only",
```
After switching to the admin user, run:
```
[root@vs-controller ~(admin)]$ neutron net-show ec814501-d5b6-48a6-8722-3177cb4b0beb
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-12-09T02:45:30                  |
| description               |                                      |
| id                        | ec814501-d5b6-48a6-8722-3177cb4b0beb |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | wp-test-net                          |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 3457  ### VLAN ID of the network created by the wp user |
| qos_policy_id             |                                      |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 97d8756ea7fd4c8293072ebac6eb9e62     |
| updated_at                | 2017-12-09T02:45:30                  |
+---------------------------+--------------------------------------+
[root@vs-controller ~(admin)]$ neutron net-show 1e70f8dd-a828-4ddf-a5f0-e5b488d747b5
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-12-09T02:46:21                  |
| description               |                                      |
| id                        | 1e70f8dd-a828-4ddf-a5f0-e5b488d747b5 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | wp-test-net2                         |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 3456  ### VLAN ID of the network created by the wp user |
| qos_policy_id             |                                      |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 97d8756ea7fd4c8293072ebac6eb9e62     |
| updated_at                | 2017-12-09T02:46:21                  |
+---------------------------+--------------------------------------+

### without the provider attributes, creation with default parameters fails,
### because the configured VLAN range is already exhausted
[root@vs-controller ~(admin)]$ neutron net-create wp-test-net3
Unable to create the network. No tenant network is available for allocation.
Neutron server returns request_ids: ['req-f7b6f8b3-c2c9-4e40-8619-383f4b8ffe21']
[root@vs-controller ~(admin)]$ neutron net-create wp-test-net3 --provider:segmentation_id 3455
Invalid input for operation: network_type required.
Neutron server returns request_ids: ['req-b055a398-5085-4611-988a-ee168b0e3937']
[root@vs-controller ~(admin)]$ neutron net-create wp-test-net3 --provider:segmentation_id 3455 --provider:network_type vlan
Invalid input for operation: segmentation_id requires physical_network for VLAN provider network.
Neutron server returns request_ids: ['req-41199bbb-e78f-4ba7-bdb5-78e33ac76758']

### with all three provider attributes specified, a network can be created
### with a VLAN ID outside the configured range
[root@vs-controller ~(admin)]$ neutron net-create wp-test-net3 --provider:segmentation_id 3455 --provider:network_type vlan --provider:physical_network physnet1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-12-09T02:48:38                  |
| description               |                                      |
| id                        | ab139d32-2d94-4648-9cf7-4919154a6a84 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | wp-test-net3                         |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 3455  ### the VLAN ID specified by the admin |
| qos_policy_id             |                                      |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 0179389d3fae429bb9ef89d3f6e9529c     |
| updated_at                | 2017-12-09T02:48:38                  |
+---------------------------+--------------------------------------+
```
The provider parameters are not needed with the vxlan type driver; see the official neutron API documentation for details.
type_drivers configures the network types neutron offers (VLAN, VXLAN, ...), while mechanism_drivers, discussed below, configures the networking component used to implement a network type (openvswitch, Linux bridge, ...). Not every combination of the two works; the official documentation gives the supported matrix: https://docs.openstack.org/newton/networking-guide/config-ml2.html#ml2-driver-support-matrix
```ini
neutron.ml2.type_drivers =
    flat = neutron.plugins.ml2.drivers.type_flat:FlatTypeDriver
    local = neutron.plugins.ml2.drivers.type_local:LocalTypeDriver
    vlan = neutron.plugins.ml2.drivers.type_vlan:VlanTypeDriver
    geneve = neutron.plugins.ml2.drivers.type_geneve:GeneveTypeDriver
    gre = neutron.plugins.ml2.drivers.type_gre:GreTypeDriver
    vxlan = neutron.plugins.ml2.drivers.type_vxlan:VxlanTypeDriver
```
Like the earlier options, the entry points are defined in setup.cfg, and the corresponding drivers are loaded via stevedore.named.NamedExtensionManager:
```python
neutron.plugins.ml2.plugin.Ml2Plugin#__init__:

class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2,
                dvr_mac_db.DVRDbMixin,
                external_net_db.External_net_db_mixin,
                sg_db_rpc.SecurityGroupServerRpcMixin,
                agentschedulers_db.AZDhcpAgentSchedulerDbMixin,
                addr_pair_db.AllowedAddressPairsMixin,
                vlantransparent_db.Vlantransparent_db_mixin,
                extradhcpopt_db.ExtraDhcpOptMixin,
                netmtu_db.Netmtu_db_mixin,
                address_scope_db.AddressScopeDbMixin):
    ......
    @property
    def supported_extension_aliases(self):
        if not hasattr(self, '_aliases'):
            aliases = self._supported_extension_aliases[:]
            aliases += self.extension_manager.extension_aliases()
            sg_rpc.disable_security_group_extension_by_config(aliases)
            vlantransparent.disable_extension_by_config(aliases)
            self._aliases = aliases
        return self._aliases

    @resource_registry.tracked_resources(
        network=models_v2.Network,
        port=models_v2.Port,
        subnet=models_v2.Subnet,
        subnetpool=models_v2.SubnetPool,
        security_group=securitygroups_db.SecurityGroup,
        security_group_rule=securitygroups_db.SecurityGroupRule)
    def __init__(self):
        # First load drivers, then initialize DB, then initialize drivers
        self.type_manager = managers.TypeManager()  ### load type drivers
        self.extension_manager = managers.ExtensionManager()  ### load extension drivers
        self.mechanism_manager = managers.MechanismManager()  ### load mechanism drivers
        super(Ml2Plugin, self).__init__()
        self.type_manager.initialize()  ### initialize type drivers
        self.extension_manager.initialize()  ### initialize extension drivers
        self.mechanism_manager.initialize()  ### initialize mechanism drivers
        self._setup_dhcp()
        self._start_rpc_notifiers()
        self.add_agent_status_check(self.agent_health_check)
        self._verify_service_plugins_requirements()
        LOG.info(_LI("Modular L2 Plugin initialization complete"))
```
```python
neutron.plugins.ml2.managers.TypeManager:

class TypeManager(stevedore.named.NamedExtensionManager):
    """Manage network segment types using drivers."""

    def __init__(self):
        # Mapping from type name to DriverManager
        self.drivers = {}

        LOG.info(_LI("Configured type driver names: %s"),
                 cfg.CONF.ml2.type_drivers)
        ### load the type drivers through the stevedore.named.NamedExtensionManager
        ### base class, resolving driver paths from the 'neutron.ml2.type_drivers'
        ### namespace in the setup.cfg entry points; see:
        ### https://docs.openstack.org/stevedore/latest/reference/index.html#namedextensionmanager
        super(TypeManager, self).__init__('neutron.ml2.type_drivers',
                                          cfg.CONF.ml2.type_drivers,
                                          invoke_on_load=True)
        LOG.info(_LI("Loaded type driver names: %s"), self.names())
        self._register_types()
        self._check_tenant_network_types(cfg.CONF.ml2.tenant_network_types)
        self._check_external_network_type(cfg.CONF.ml2.external_network_type)

    def _register_types(self):
        for ext in self:
            network_type = ext.obj.get_type()
            if network_type in self.drivers:
                LOG.error(_LE("Type driver '%(new_driver)s' ignored because"
                              " type driver '%(old_driver)s' is already"
                              " registered for type '%(type)s'"),
                          {'new_driver': ext.name,
                           'old_driver': self.drivers[network_type].name,
                           'type': network_type})
            else:
                self.drivers[network_type] = ext
        LOG.info(_LI("Registered types: %s"), self.drivers.keys())

    def _check_tenant_network_types(self, types):
        self.tenant_network_types = []
        for network_type in types:
            if network_type in self.drivers:
                self.tenant_network_types.append(network_type)
            else:
                LOG.error(_LE("No type driver for tenant network_type: %s. "
                              "Service terminated!"), network_type)
                raise SystemExit(1)
        LOG.info(_LI("Tenant network_types: %s"), self.tenant_network_types)

    def _check_external_network_type(self, ext_network_type):
        if ext_network_type and ext_network_type not in self.drivers:
            LOG.error(_LE("No type driver for external network_type: %s. "
                          "Service terminated!"), ext_network_type)
            raise SystemExit(1)
```
```ini
[ml2]
mechanism_drivers = openvswitch
extension_drivers = port_security, qos
```
mechanism_drivers make it possible for virtual ports on different compute nodes to use different underlying physical network technologies. For example, for the nodes equipped with SR-IOV passthrough NICs, the SRIOV driver can be added to the configuration. Beyond that, different virtual ports on the same node can also use different underlying networks, e.g. some ports using SR-IOV while the others use openvswitch. This is another example of the flexibility of neutron's plugin architecture.
The supported mechanism_drivers (defined in the [entry_points] section of setup.cfg) are:
```ini
neutron.ml2.mechanism_drivers =
    logger = neutron.tests.unit.plugins.ml2.drivers.mechanism_logger:LoggerMechanismDriver  ### for local debugging
    test = neutron.tests.unit.plugins.ml2.drivers.mechanism_test:TestMechanismDriver  ### for unit tests
    linuxbridge = neutron.plugins.ml2.drivers.linuxbridge.mech_driver.mech_linuxbridge:LinuxbridgeMechanismDriver
    macvtap = neutron.plugins.ml2.drivers.macvtap.mech_driver.mech_macvtap:MacvtapMechanismDriver
    openvswitch = neutron.plugins.ml2.drivers.openvswitch.mech_driver.mech_openvswitch:OpenvswitchMechanismDriver
    l2population = neutron.plugins.ml2.drivers.l2pop.mech_driver:L2populationMechanismDriver
    sriovnicswitch = neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver:SriovNicSwitchMechanismDriver
    fake_agent = neutron.tests.unit.plugins.ml2.drivers.mech_fake_agent:FakeAgentMechanismDriver  ### for unit tests
```
Of these, only linuxbridge, macvtap, openvswitch, l2population and sriovnicswitch are actually usable. As mentioned above, different mechanisms support different type drivers; openvswitch supports the most complete set of types and is also the most commonly used.
Mechanism drivers are mostly tied to port operations and have little to do with networks and subnets, because network and subnet operations are mainly metadata CRUD and do not change the underlying virtual network topology. Port operations, by contrast, such as binding a port to a VM or unbinding it, involve creating and deleting virtual ports (typically tap devices) on the physical host where the VM runs, which depends on openvswitch or Linux bridge. The component that actually performs those operations is usually the L2 agent on that host, such as neutron-openvswitch-agent, since the work requires talking to the host's ovs-vswitchd process.
To understand what a mechanism driver is for, it helps to walk through the update_port flow, which is roughly:
neutron.plugins.ml2.plugin.Ml2Plugin#update_port
-> neutron.plugins.ml2.plugin.Ml2Plugin#_bind_port_if_needed
-> neutron.plugins.ml2.plugin.Ml2Plugin#_attempt_binding
-> neutron.plugins.ml2.plugin.Ml2Plugin#_bind_port
-> neutron.plugins.ml2.managers.MechanismManager#bind_port
-> neutron.plugins.ml2.managers.MechanismManager#_bind_port_level:
```python
        for driver in self.ordered_mech_drivers:
            if not self._check_driver_to_bind(driver, segments_to_bind,
                                              context._binding_levels):
                continue
            try:
                context._prepare_to_bind(segments_to_bind)
                driver.obj.bind_port(context)
                ......
```
Even in this flow, the openvswitch mechanism driver actually does very little. Looking at its implementation, neutron.plugins.ml2.drivers.openvswitch.mech_driver.mech_openvswitch.OpenvswitchMechanismDriver, it implements very few methods; in other words, the real work is done in the L2 agent.
The flow in which neutron-openvswitch-agent handles update_port-related operations is roughly as follows (partly covered in the earlier post on nova adding a security group to an instance):
First, the agent registers the MQ RPC listener threads and their callbacks, e.g. to receive port update and delete messages sent from the server side, which are recorded in two sets on OVSNeutronAgent:
```python
        # Stores port update notifications for processing in main rpc loop
        self.updated_ports = set()
        # Stores port delete notifications
        self.deleted_ports = set()
```
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.main
-> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#__init__
-> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#setup_rpc
Then daemon_loop / rpc_loop polls those sets for changes and processes them accordingly:
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#daemon_loop
-> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#rpc_loop
-> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#_port_info_has_changes (checks whether port info changed: update, add or remove; if so, calls)
-> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#process_network_ports
-> ...
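The record-then-drain pattern used by rpc_loop can be sketched as follows; `MiniAgent` is a simplified stand-in for OVSNeutronAgent, written for illustration:

```python
class MiniAgent:
    """Collects port notifications from RPC callbacks in sets and drains
    them in a polling loop, like OVSNeutronAgent's updated_ports and
    deleted_ports."""

    def __init__(self):
        self.updated_ports = set()
        self.deleted_ports = set()
        self.processed = []

    # The RPC callbacks only record the port ids; the loop does the work
    def port_update(self, port_id):
        self.updated_ports.add(port_id)

    def port_delete(self, port_id):
        self.deleted_ports.add(port_id)

    def rpc_loop_iteration(self):
        # take a snapshot so callbacks arriving mid-processing are kept
        # for the next iteration instead of being lost
        updated, self.updated_ports = self.updated_ports, set()
        deleted, self.deleted_ports = self.deleted_ports, set()
        for port_id in sorted(updated - deleted):
            self.processed.append(('update', port_id))
        for port_id in sorted(deleted):
            self.processed.append(('delete', port_id))
```

Using sets also deduplicates bursts of notifications for the same port, so a port updated five times between loop iterations is processed once.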
What type_drivers and mechanism_drivers do (under the VLAN type + ovs mechanism configuration, using create_network as the example):
```python
neutron.plugins.ml2.plugin.Ml2Plugin:

    def _create_network_db(self, context, network):
        net_data = network[attributes.NETWORK]
        tenant_id = net_data['tenant_id']
        session = context.session
        with session.begin(subtransactions=True):
            self._ensure_default_security_group(context, tenant_id)
            ### assemble the basic network fields, without extension and provider fields
            net_db = self.create_network_db(context, network)
            result = self._make_network_dict(net_db, process_extensions=False,
                                             context=context)
            ### let the extension drivers perform their network-creation work
            ### and fill in extension fields such as port_security
            self.extension_manager.process_create_network(context, net_data,
                                                          result)
            ### no L3 operations are involved in VLAN mode
            self._process_l3_create(context, result, net_data)
            net_data['id'] = result['id']
            ### hand over to the type driver
            self.type_manager.create_network_segments(context, net_data,
                                                      tenant_id)
            self.type_manager.extend_network_dict_provider(context, result)
            # Update the transparent vlan if configured
            if utils.is_extension_supported(self, 'vlan-transparent'):
                vlt = vlantransparent.get_vlan_transparent(net_data)
                net_db['vlan_transparent'] = vlt
                result['vlan_transparent'] = vlt
            mech_context = driver_context.NetworkContext(self, context,
                                                         result)
            ### the mechanism driver gets its chance here, but under vlan+ovs
            ### network creation does not involve it
            self.mechanism_manager.create_network_precommit(mech_context)

            if net_data.get(api.MTU, 0) > 0:
                net_db[api.MTU] = net_data[api.MTU]
                result[api.MTU] = net_data[api.MTU]

            if az_ext.AZ_HINTS in net_data:
                self.validate_availability_zones(context, 'network',
                                                 net_data[az_ext.AZ_HINTS])
                az_hints = az_ext.convert_az_list_to_string(
                    net_data[az_ext.AZ_HINTS])
                net_db[az_ext.AZ_HINTS] = az_hints
                result[az_ext.AZ_HINTS] = az_hints

        self._apply_dict_extend_functions('networks', result, net_db)
        return result, mech_context

    def create_network(self, context, network):
        result, mech_context = self._create_network_db(context, network)
        try:
            ### under vlan+ovs, creating a network does not involve the mechanism driver
            self.mechanism_manager.create_network_postcommit(mech_context)
        except ml2_exc.MechanismDriverError:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("mechanism_manager.create_network_postcommit "
                              "failed, deleting network '%s'"), result['id'])
                self.delete_network(context, result['id'])
        return result
```
```python
neutron.plugins.ml2.managers.TypeManager:

    def create_network_segments(self, context, network, tenant_id):
        """Call type drivers to create network segments."""
        ### if the admin specified provider attributes, validate them
        segments = self._process_provider_create(network)
        session = context.session
        mtu = []
        with session.begin(subtransactions=True):
            network_id = network['id']
            if segments:
                ### the admin specified a segmentation_id, so try to use it
                for segment_index, segment in enumerate(segments):
                    segment = self.reserve_provider_segment(
                        session, segment)
                    self._add_network_segment(session, network_id, segment,
                                              mtu, segment_index)
            elif (cfg.CONF.ml2.external_network_type and
                  self._get_attribute(network, external_net.EXTERNAL)):
                segment = self._allocate_ext_net_segment(session)
                self._add_network_segment(session, network_id, segment, mtu)
            else:
                ### no provider attributes given, allocate a segment automatically
                segment = self._allocate_tenant_net_segment(session)
                self._add_network_segment(session, network_id, segment, mtu)
        network[api.MTU] = min(mtu) if mtu else 0
```
```python
neutron.plugins.ml2.managers.TypeManager:

    def _allocate_segment(self, session, network_type):
        ### only one tenant network type (vlan) is configured here, so this
        ### finds the corresponding type driver object
        driver = self.drivers.get(network_type)
        ### perform the segment allocation
        return driver.obj.allocate_tenant_segment(session)

    def _allocate_tenant_net_segment(self, session):
        ### iterate over all configured tenant_network_types and return the
        ### first successful allocation
        for network_type in self.tenant_network_types:
            segment = self._allocate_segment(session, network_type)
            if segment:
                return segment
        raise exc.NoNetworkAvailable()
```
neutron.plugins.ml2.drivers.type_vlan.VlanTypeDriver:

```python
    def allocate_tenant_segment(self, session):
        alloc = self.allocate_partially_specified_segment(session)
        if not alloc:
            return
        return {api.NETWORK_TYPE: p_const.TYPE_VLAN,
                api.PHYSICAL_NETWORK: alloc.physical_network,
                api.SEGMENTATION_ID: alloc.vlan_id,
                api.MTU: self.get_mtu(alloc.physical_network)}
```
neutron.plugins.ml2.drivers.helpers.SegmentTypeDriver:

```python
    def allocate_partially_specified_segment(self, session, **filters):
        """Allocate model segment from pool partially specified by filters.

        Return allocated db object or None.
        """
        ### The type driver determines which table is queried for the
        ### allocation; in VLAN mode it is the ml2_vlan_allocations table
        network_type = self.get_type()
        with session.begin(subtransactions=True):
            select = (session.query(self.model).
                      filter_by(allocated=False, **filters))

            # Selected segment can be allocated before update by someone else,
            allocs = select.limit(IDPOOL_SELECT_SIZE).all()

            if not allocs:
                # No resource available
                return

            ### Pick one at random among the unallocated VLAN ID records
            alloc = random.choice(allocs)
            raw_segment = dict((k, alloc[k]) for k in self.primary_keys)
            LOG.debug("%(type)s segment allocate from pool "
                      "started with %(segment)s ",
                      {"type": network_type, "segment": raw_segment})
            count = (session.query(self.model).
                     filter_by(allocated=False, **raw_segment).
                     update({"allocated": True}))
            if count:
                ### Allocation succeeded; return it
                LOG.debug("%(type)s segment allocate from pool "
                          "success with %(segment)s ",
                          {"type": network_type, "segment": raw_segment})
                return alloc

            # Segment allocated since select
            LOG.debug("Allocate %(type)s segment from pool "
                      "failed with segment %(segment)s",
                      {"type": network_type, "segment": raw_segment})
        # saving real exception in case we exceeded amount of attempts
        raise db_exc.RetryRequest(
            exc.NoNetworkFoundInMaximumAllowedAttempts())
```
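This is optimistic concurrency control: select a random free row, then do a compare-and-swap style `UPDATE ... WHERE allocated = False`; if another worker grabbed the same row between the select and the update, the update matches zero rows and a retry is requested. A self-contained sketch of the idea using a plain dict in place of the ml2_vlan_allocations table (the function and exception names are illustrative, not Neutron's):

```python
import random


class RetryRequest(Exception):
    """Signal the caller to retry, like oslo.db's RetryRequest."""


def allocate_from_pool(pool, select_size=10):
    """Allocate one VLAN ID from pool (dict: vlan_id -> allocated flag).

    Returns the allocated vlan_id, or None when the pool is exhausted.
    """
    # "SELECT ... WHERE allocated = False LIMIT select_size"
    free = [vid for vid, allocated in pool.items() if not allocated]
    free = free[:select_size]
    if not free:
        return None  # no resource available

    # Random choice spreads concurrent workers across the candidate rows,
    # reducing the chance that two of them race for the same VLAN ID.
    choice = random.choice(free)

    # Compare-and-swap: only flip the flag if the row is still unallocated.
    # In the real code this is an UPDATE whose row count tells us whether
    # another worker won the race; with a single-threaded dict the race
    # branch is never taken, it is here to mirror the structure above.
    if pool[choice] is False:
        pool[choice] = True
        return choice
    raise RetryRequest()
```

The random choice is the interesting design decision: picking the lowest free ID instead would funnel every concurrent request onto the same row and maximize retry storms.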
When the administrator specifies provider attributes, allocation is attempted with the specified values:
neutron.plugins.ml2.drivers.type_vlan.VlanTypeDriver:

```python
    def reserve_provider_segment(self, session, segment):
        filters = {}
        physical_network = segment.get(api.PHYSICAL_NETWORK)
        if physical_network is not None:
            filters['physical_network'] = physical_network
            vlan_id = segment.get(api.SEGMENTATION_ID)
            if vlan_id is not None:
                filters['vlan_id'] = vlan_id

        if self.is_partial_segment(segment):
            ### Compared with automatic allocation, the only difference is
            ### the extra filters
            alloc = self.allocate_partially_specified_segment(
                session, **filters)
            if not alloc:
                raise exc.NoNetworkAvailable()
        else:
            alloc = self.allocate_fully_specified_segment(
                session, **filters)
            if not alloc:
                raise exc.VlanIdInUse(**filters)

        return {api.NETWORK_TYPE: p_const.TYPE_VLAN,
                api.PHYSICAL_NETWORK: alloc.physical_network,
                api.SEGMENTATION_ID: alloc.vlan_id,
                api.MTU: self.get_mtu(alloc.physical_network)}
```
/etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = physnet1:br-eth7
This option maps each provider network to the physical network device that carries it (depending on the mechanism driver, typically an OVS bridge or a Linux bridge). The bridge must be created manually in advance, and the corresponding physical NIC must be added to it. br-int, by contrast, needs no manual management: br-int and the provider-network bridge (for example physnet1's br-eth7, the VM traffic exit) are connected by a patch port.
Only one physical network (physnet1) is configured here, but multiple provider networks can be configured by separating the entries with commas.
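Internally the agent turns this comma-separated physnet:bridge string into a dict (Neutron does this with a mapping-parsing helper in neutron.common.utils). A rough standalone equivalent, with a hypothetical function name:

```python
def parse_bridge_mappings(mapping_str):
    """Parse 'physnet1:br-eth7,physnet2:br-eth8' into {physnet: bridge}.

    Raises ValueError on malformed entries or duplicate physical networks.
    """
    mappings = {}
    for entry in mapping_str.split(','):
        entry = entry.strip()
        if not entry:
            continue
        try:
            physnet, bridge = entry.split(':')
        except ValueError:
            raise ValueError("invalid mapping: %r" % entry)
        physnet, bridge = physnet.strip(), bridge.strip()
        if not physnet or not bridge:
            raise ValueError("invalid mapping: %r" % entry)
        if physnet in mappings:
            raise ValueError("duplicate physical network: %r" % physnet)
        mappings[physnet] = bridge
    return mappings
```

For example, `parse_bridge_mappings("physnet1:br-eth7")` yields `{"physnet1": "br-eth7"}`; the agent then iterates over that dict when setting up its physical bridges.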
How this option is used in VLAN mode:
First, when neutron-openvswitch-agent starts, it checks that each bridge named in the option has already been created, and performs the related initialization:
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#__init__ –> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#setup_physical_bridges
Second, in the subsequent rpc_loop, whenever a port change is detected, the agent updates the flow tables on that bridge (for example, maintaining the translation between OVS local VLAN IDs and physical VLAN IDs):
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#daemon_loop –> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#rpc_loop –> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#_port_info_has_changes (checks whether port info has changed: update, add, or remove; when it has, the chain continues) –> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#process_network_ports –> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#treat_devices_added_or_updated –> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#treat_vif_port –> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#port_bound –> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#provision_local_vlan –> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#_local_vlan_for_vlan –> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.br_phys.OVSPhysicalBridge#provision_local_vlan (installs the flows)
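The provision_local_vlan step at the end of that chain maintains a per-host mapping between a network's global segmentation ID and a locally significant VLAN ID on br-int. A simplified sketch of such a local VLAN manager (the class and its pool handling are illustrative, not the agent's actual code):

```python
class LocalVlanManager:
    """Map (physnet, segmentation_id) -> a locally allocated br-int VLAN."""

    def __init__(self, first=1, last=4094):
        self.available = set(range(first, last + 1))
        self.mappings = {}

    def provision(self, physnet, segmentation_id):
        key = (physnet, segmentation_id)
        if key in self.mappings:
            # Network already has a local VLAN on this host; reuse it.
            return self.mappings[key]
        if not self.available:
            raise RuntimeError("no local VLAN IDs left")
        lvid = self.available.pop()
        self.mappings[key] = lvid
        # Here the real agent installs the two translation flows:
        #   br-int:  rewrite physical VLAN -> local lvid on the patch port
        #   br-phys: rewrite local lvid -> physical VLAN on egress
        return lvid

    def reclaim(self, physnet, segmentation_id):
        # Called when the last port of the network leaves this host.
        lvid = self.mappings.pop((physnet, segmentation_id), None)
        if lvid is not None:
            self.available.add(lvid)
```

The point of the indirection is that local VLAN IDs only need to be unique per host, so every compute node can pack its networks into the 1-4094 range independently of the global segmentation IDs.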
Conclusion
This article, like the previous few neutron posts, analyzes control-plane API flows, which are the comparatively easy part. Next I will try to tackle the core of neutron, the east-west and north-south data paths, and study how packets travel between VMs (same tenant and cross tenant), from VMs to physical machines, and from VMs to the external network under the different network modes.