Analysis of the nova-cinder Interaction Flow

Original post (Chinese): http://aspirer2004.blog.163.com/blog/static/106764720134755131463/

This article investigates the interaction flow between cinder and nova, and analyzes how our own block storage system (the "cloud disk" service) could be integrated with nova.

1.    Existing nova block device APIs

The block device APIs that nova already supports are documented at http://api.openstack.org/api-ref.html, under the "Volume Attachments" and "Volume Extension to Compute" sections.

Operations (all delete operations are asynchronous; callers must confirm completion themselves by polling the query APIs; a polling sketch follows the query list below):

  • Create a volume, including restoring a volume from a snapshot (an availability zone may be specified) (requires the tenant ID)
  • Delete a volume (requires the tenant ID and volume ID)
  • Attach a volume (requires the tenant ID, server ID, and volume ID)
  • Detach a volume (requires the tenant ID, server ID, and volume ID)
  • Snapshot a volume (requires the tenant ID and volume ID)
  • Delete a snapshot (requires the tenant ID and snapshot ID)

Query operations:

  • List the volumes attached to a server (requires the tenant ID and server ID)
  • Show the details of an attachment, given the server ID and the ID of the attached volume (requires the tenant ID, server ID, and volume ID)
  • List all of a tenant's volumes (requires the tenant ID)
  • Show the details of one volume by its ID (requires the tenant ID and volume ID)
  • List all of a tenant's volume snapshots (requires the tenant ID)
  • Show the details of a volume snapshot (requires the tenant ID and snapshot ID)
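
Because deletes only kick off an asynchronous operation, a caller has to poll the query APIs until the resource is gone. Here is a minimal sketch of that pattern in Python; the endpoint, tenant, token, and the assumption that a fully deleted volume answers 404 are all placeholders for illustration, not verified behavior:

import time

import requests

# Placeholder assumptions: a real deployment needs the actual endpoint,
# tenant and a valid keystone token.
ENDPOINT = "http://localhost:8774/v1.1/demo-tenant"
HEADERS = {"X-Auth-Token": "<keystone-token>"}


def delete_volume_and_wait(volume_id, timeout=60, interval=2):
    """Issue the asynchronous delete, then poll until the volume is gone."""
    url = "%s/os-volumes/%s" % (ENDPOINT, volume_id)
    requests.delete(url, headers=HEADERS).raise_for_status()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if requests.get(url, headers=HEADERS).status_code == 404:
            return  # the volume no longer exists; the delete is complete
        time.sleep(interval)
    raise RuntimeError("volume %s still present after %ss" % (volume_id,
                                                              timeout))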

 

APIs that need to be added:

  • A volume-extend (resize) API (we already have experience adding APIs to nova, so this is relatively easy to implement; see section 3 for a sketch)

2.    Analysis of the nova-cinder interaction flows

Only two representative interaction flows are analyzed here.

2.1     Volume creation flow (cinder)

Volume creation includes restoring a volume from a snapshot.

API URL: POST http://localhost:8774/v1.1/{tenant_id}/os-volumes

Request parameters

Parameter    Description
tenant_id    The unique identifier of the tenant or account.
volume_id    The unique identifier for a volume.
volume       A partial representation of a volume that is used to create a volume.

Create Volume Request: JSON

{
    "volume": {
        "display_name": "vol-001",
        "display_description": "Another volume.",
        "size": 30,
        "volume_type": "289da7f8-6440-407c-9fb4-7db01ec49164",
        "metadata": {"contents": "junk"},
        "availability_zone": "us-east1"
    }
}

 

Create Volume Response: JSON

{
    "volume": {
        "id": "521752a6-acf6-4b2d-bc7a-119f9148cd8c",
        "display_name": "vol-001",
        "display_description": "Another volume.",
        "size": 30,
        "volume_type": "289da7f8-6440-407c-9fb4-7db01ec49164",
        "metadata": {"contents": "junk"},
        "availability_zone": "us-east1",
        "snapshot_id": null,
        "attachments": [],
        "created_at": "2012-02-14T20:53:07Z"
    }
}

 

 

# nova\api\openstack\compute\contrib\volumes.py:
# VolumeController.create()

@wsgi.serializers(xml=VolumeTemplate)
@wsgi.deserializers(xml=CreateDeserializer)
def create(self, req, body):
    """Creates a new volume."""
    context = req.environ['nova.context']
    authorize(context)

    if not self.is_valid_body(body, 'volume'):
        raise exc.HTTPUnprocessableEntity()

    vol = body['volume']
    # Volume types are not supported for now; simply omit the parameter.
    vol_type = vol.get('volume_type', None)
    if vol_type:
        try:
            vol_type = volume_types.get_volume_type_by_name(context,
                                                            vol_type)
        except exception.NotFound:
            raise exc.HTTPNotFound()

    metadata = vol.get('metadata', None)
    # To restore a volume from a snapshot, pass in the snapshot ID.
    snapshot_id = vol.get('snapshot_id')

    if snapshot_id is not None:
        # Restoring a volume from a snapshot requires this method;
        # self.volume_api is explained below.
        snapshot = self.volume_api.get_snapshot(context, snapshot_id)
    else:
        snapshot = None

    size = vol.get('size', None)
    if size is None and snapshot is not None:
        size = snapshot['volume_size']

    LOG.audit(_("Create volume of %s GB"), size, context=context)
    # Availability zone of the volume.
    availability_zone = vol.get('availability_zone', None)
    # The cloud-disk service must implement this method as well;
    # self.volume_api is explained below.
    new_volume = self.volume_api.create(context,
                                        size,
                                        vol.get('display_name'),
                                        vol.get('display_description'),
                                        snapshot=snapshot,
                                        volume_type=vol_type,
                                        metadata=metadata,
                                        availability_zone=availability_zone
                                        )

    # TODO(vish): Instance should be None at db layer instead of
    #             trying to lazy load, but for now we turn it into
    #             a dict to avoid an error.
    retval = _translate_volume_detail_view(context, dict(new_volume))
    result = {'volume': retval}

    location = '%s/%s' % (req.url, new_volume['id'])

    return wsgi.ResponseObject(result, headers=dict(location=location))

 

# Where self.volume_api comes from
self.volume_api = volume.API()

Here volume comes from the import "from nova import volume".

# nova\volume\__init__.py:
def API():
    importutils = nova.openstack.common.importutils
    cls = importutils.import_class(nova.flags.FLAGS.volume_api_class)
    return cls()

So every method reached through self.volume_api is decided by the volume_api_class config option. The default is the nova-volume API wrapper class:

cfg.StrOpt('volume_api_class',
           default='nova.volume.api.API',
           help='The full class name of the volume API class to use'),

It can also be switched to cinder's wrapper class simply by setting volume_api_class=nova.volume.cinder.API. The cinder wrapper class reaches the cinder API through the cinderclient library (which is itself just a wrapper around the cinder REST API). The cloud-disk service can likewise implement a similar client library, or call its existing API directly to perform the same actions. Using nova\volume\cinder.py as a template, the cloud disk can develop its own API wrapper class for NVS to use. Since the cloud-disk API is already implemented, the remaining work is only wrapping it, which should be modest; the main point to watch is authentication.
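To make the shape of such a wrapper concrete, here is a minimal sketch of a hypothetical nova\volume\clouddisk.py. The clouddisk_client module and all of its method names are stand-ins for the existing cloud-disk REST API (nothing here is existing code), and only the methods exercised by the flows analyzed in this article are shown:

# Hypothetical nova/volume/clouddisk.py; enabled with
#   volume_api_class=nova.volume.clouddisk.API
# clouddisk_client and its method names are stand-ins for the existing
# cloud-disk REST API.
import clouddisk_client

from nova import exception


class API(object):
    """Minimal volume API wrapper for the in-house cloud-disk service."""

    def __init__(self):
        # A real implementation would read endpoint/credentials from FLAGS.
        self._client = clouddisk_client.Client()

    def get(self, context, volume_id):
        vol = self._client.get_volume(context.project_id, volume_id)
        if vol is None:
            raise exception.VolumeNotFound(volume_id=volume_id)
        return vol

    def get_snapshot(self, context, snapshot_id):
        return self._client.get_snapshot(context.project_id, snapshot_id)

    def create(self, context, size, name, description, snapshot=None,
               volume_type=None, metadata=None, availability_zone=None):
        snapshot_id = snapshot['id'] if snapshot else None
        return self._client.create_volume(context.project_id, size, name,
                                          description, snapshot_id,
                                          metadata, availability_zone)

    def check_attach(self, context, volume):
        # The same precondition nova relies on before an attach.
        if volume['status'] != 'available':
            raise exception.InvalidVolume(reason='status must be available')

    def reserve_volume(self, context, volume):
        self._client.set_volume_status(volume['id'], 'attaching')

    def unreserve_volume(self, context, volume):
        self._client.set_volume_status(volume['id'], 'available')

    def initialize_connection(self, context, volume, connector):
        # Must return {'driver_volume_type': ..., 'data': {...}} in the
        # format the libvirt volume drivers expect (see section 2.2).
        return self._client.initialize_connection(volume['id'], connector)

    def terminate_connection(self, context, volume, connector):
        self._client.terminate_connection(volume['id'], connector)

    def attach(self, context, volume, instance_uuid, mountpoint):
        # Records the attachment in the storage service's database.
        self._client.attach_volume(volume['id'], instance_uuid, mountpoint)

Authentication (keystone versus the management platform) would plug in wherever the client is constructed, which is exactly the open question flagged in section 4.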

The snapshot-related operations and queries follow exactly the same pattern; imitating nova\volume\cinder.py is enough to implement them.

 

 

2.2     Volume attach flow (cinder)

API URL: POST http://localhost:8774/v2/{tenant_id}/servers/{server_id}/os-volume_attachments

 

Request parameters

Parameter          Description
tenant_id          The ID for the tenant or account in a multi-tenancy cloud.
server_id          The UUID for the server of interest to you.
volumeId           ID of the volume to attach.
device             Name of the device, e.g. /dev/vdb. Use "auto" for autoassign (if supported).
volumeAttachment   A dictionary representation of a volume attachment.

Attach Volume to Server Request: JSON

{
    "volumeAttachment": {
        "volumeId": "<volume_id>",
        "device": "<device>"
    }
}

 

Attach Volume to Server Response: JSON

{
    "volumeAttachment": {
        "device": "/dev/vdd",
        "serverId": "fd783058-0e27-48b0-b102-a6b4d4057cac",
        "id": "5f800cf0-324f-4234-bc6b-e12d5816e962",
        "volumeId": "5f800cf0-324f-4234-bc6b-e12d5816e962"
    }
}

Note that this API itself returns synchronously, while the actual attach of the volume to the instance is asynchronous.
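
A caller that needs to know when the attach has actually finished can poll the attachment-detail API until it stops failing, mirroring the visibility window described in the NOTE(justinsb) comment in the code below. A minimal sketch (endpoint, tenant, and token are placeholder assumptions, as in the earlier sketch):

import time

import requests

ENDPOINT = "http://localhost:8774/v2/demo-tenant"  # placeholder
HEADERS = {"X-Auth-Token": "<keystone-token>"}     # placeholder


def wait_for_attachment(server_id, volume_id, timeout=60, interval=2):
    """Poll the attachment-detail API until the async attach completes.

    The attachment id equals the volume id in this API (see the code
    below, where attachment['id'] is set to volume_id).
    """
    url = "%s/servers/%s/os-volume_attachments/%s" % (ENDPOINT, server_id,
                                                      volume_id)
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(url, headers=HEADERS)
        if resp.status_code == 200:
            return resp.json()["volumeAttachment"]
        time.sleep(interval)
    raise RuntimeError("attach of %s not visible after %ss" % (volume_id,
                                                               timeout))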

# nova\api\openstack\compute\contrib\volumes.py:
# VolumeAttachmentController.create()

@wsgi.serializers(xml=VolumeAttachmentTemplate)
def create(self, req, server_id, body):
    """Attach a volume to an instance."""
    context = req.environ['nova.context']
    authorize(context)

    if not self.is_valid_body(body, 'volumeAttachment'):
        raise exc.HTTPUnprocessableEntity()

    volume_id = body['volumeAttachment']['volumeId']
    device = body['volumeAttachment'].get('device')

    msg = _("Attach volume %(volume_id)s to instance %(server_id)s"
            " at %(device)s") % locals()
    LOG.audit(msg, context=context)

    try:
        instance = self.compute_api.get(context, server_id)
        # nova-compute does the actual attach of the volume to the VM.
        device = self.compute_api.attach_volume(context, instance,
                                                volume_id, device)
    except exception.NotFound:
        raise exc.HTTPNotFound()

    # The attach is async
    attachment = {}
    attachment['id'] = volume_id
    attachment['serverId'] = server_id
    attachment['volumeId'] = volume_id
    attachment['device'] = device

    # NOTE(justinsb): And now, we have a problem...
    # The attach is async, so there's a window in which we don't see
    # the attachment (until the attachment completes).  We could also
    # get problems with concurrent requests.  I think we need an
    # attachment state, and to write to the DB here, but that's a bigger
    # change.
    # For now, we'll probably have to rely on libraries being smart

    # TODO(justinsb): How do I return "accepted" here?
    return {'volumeAttachment': attachment}

 

# nova\compute\api.py:API.attach_volume()

@wrap_check_policy
@check_instance_lock
def attach_volume(self, context, instance, volume_id, device=None):
    """Attach an existing volume to an existing instance."""
    # NOTE(vish): Fail fast if the device is not going to pass. This
    #             will need to be removed along with the test if we
    #             change the logic in the manager for what constitutes
    #             a valid device.
    if device and not block_device.match_device(device):
        raise exception.InvalidDevicePath(path=device)
    # NOTE(vish): This is done on the compute host because we want
    #             to avoid a race where two devices are requested at
    #             the same time. When db access is removed from
    #             compute, the bdm will be created here and we will
    #             have to make sure that they are assigned atomically.
    device = self.compute_rpcapi.reserve_block_device_name(
        context, device=device, instance=instance)
    try:
        # The cloud-disk service must implement this method too; see
        # nova\volume\cinder.py for reference.
        volume = self.volume_api.get(context, volume_id)
        # Check whether the volume can be attached.
        self.volume_api.check_attach(context, volume)
        # Reserve the volume to guard against concurrent attaches.
        self.volume_api.reserve_volume(context, volume)
        # RPC-cast (asynchronously) to the nova-compute service on the
        # host running the instance, which performs the attach.
        self.compute_rpcapi.attach_volume(context, instance=instance,
                                          volume_id=volume_id,
                                          mountpoint=device)
    except Exception:
        with excutils.save_and_reraise_exception():
            self.db.block_device_mapping_destroy_by_instance_and_device(
                context, instance['uuid'], device)
    # The API returns here.
    return device

 

# nova\compute\manager.py:ComputeManager.attach_volume()

@exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
@reverts_task_state
@wrap_instance_fault
def attach_volume(self, context, volume_id, mountpoint, instance):
    """Attach a volume to an instance."""
    try:
        return self._attach_volume(context, volume_id,
                                   mountpoint, instance)
    except Exception:
        with excutils.save_and_reraise_exception():
            self.db.block_device_mapping_destroy_by_instance_and_device(
                context, instance.get('uuid'), mountpoint)

def _attach_volume(self, context, volume_id, mountpoint, instance):
    # The same volume_api.get method as above.
    volume = self.volume_api.get(context, volume_id)
    context = context.elevated()
    LOG.audit(_('Attaching volume %(volume_id)s to %(mountpoint)s'),
              locals(), context=context, instance=instance)
    try:
        # Returns the initiator information; analyzed below.
        connector = self.driver.get_volume_connector(instance)
        # The cloud-disk service must implement this method; cinder's
        # implementation is shown below.
        connection_info = self.volume_api.initialize_connection(context,
                                                                volume,
                                                                connector)
    except Exception:  # pylint: disable=W0702
        with excutils.save_and_reraise_exception():
            msg = _("Failed to connect to volume %(volume_id)s "
                    "while attaching at %(mountpoint)s")
            LOG.exception(msg % locals(), context=context,
                          instance=instance)
            # This method must be implemented as well.
            self.volume_api.unreserve_volume(context, volume)

    if 'serial' not in connection_info:
        connection_info['serial'] = volume_id

    try:
        self.driver.attach_volume(connection_info,
                                  instance['name'],
                                  mountpoint)
    except Exception:  # pylint: disable=W0702
        with excutils.save_and_reraise_exception():
            msg = _("Failed to attach volume %(volume_id)s "
                    "at %(mountpoint)s")
            LOG.exception(msg % locals(), context=context,
                          instance=instance)
            self.volume_api.terminate_connection(context,
                                                 volume,
                                                 connector)
    # This method must also be implemented; it updates the volume's
    # state in the cinder database.
    self.volume_api.attach(context,
                           volume,
                           instance['uuid'],
                           mountpoint)
    values = {
        'instance_uuid': instance['uuid'],
        'connection_info': jsonutils.dumps(connection_info),
        'device_name': mountpoint,
        'delete_on_termination': False,
        'virtual_name': None,
        'snapshot_id': None,
        'volume_id': volume_id,
        'volume_size': None,
        'no_device': None}
    self.db.block_device_mapping_update_or_create(context, values)

 

# nova\virt\libvirt\driver.py:LibvirtDriver.get_volume_connector()

def get_volume_connector(self, instance):
    if not self._initiator:
        self._initiator = libvirt_utils.get_iscsi_initiator()
        if not self._initiator:
            LOG.warn(_('Could not determine iscsi initiator name'),
                     instance=instance)
    return {
        'ip': FLAGS.my_ip,             # IP address of the compute host
        'initiator': self._initiator,
        'host': FLAGS.host             # hostname of the compute host
    }

# nova\virt\libvirt\utils.py:get_iscsi_initiator()

def get_iscsi_initiator():
    """Get iscsi initiator name for this machine"""
    # NOTE(vish) openiscsi stores initiator name in a file that
    #            needs root permission to read.
    contents = utils.read_file_as_root('/etc/iscsi/initiatorname.iscsi')
    for l in contents.split('\n'):
        if l.startswith('InitiatorName='):
            return l[l.index('=') + 1:].strip()

 

The cinder API wrapper implementation inside nova:

# nova\volume\cinder.py:API.initialize_connection():

def initialize_connection(self, context, volume, connector):
    return cinderclient(context).\
        volumes.initialize_connection(volume['id'], connector)

 

This calls initialize_connection in cinder; the iSCSI driver implementation looks like this:

# cinder\volume\iscsi.py:LioAdm.initialize_connection()

def initialize_connection(self, volume, connector):
    volume_iqn = volume['provider_location'].split(' ')[1]

    (auth_method, auth_user, auth_pass) = \
        volume['provider_auth'].split(' ', 3)

    # Add initiator iqns to target ACL
    try:
        self._execute('rtstool', 'add-initiator',
                      volume_iqn,
                      auth_user,
                      auth_pass,
                      connector['initiator'],
                      run_as_root=True)
    except exception.ProcessExecutionError as e:
        LOG.error(_("Failed to add initiator iqn %s to target") %
                  connector['initiator'])
        raise exception.ISCSITargetAttachFailed(volume_id=volume['id'])
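
For the iSCSI case, the connection_info that ends up back in nova-compute has roughly the following shape (the values here are illustrative, not taken from a real deployment); its 'data' dict is what the libvirt iSCSI driver further below consumes as iscsi_properties:

# Representative connection_info for an iSCSI volume (illustrative values).
connection_info = {
    'driver_volume_type': 'iscsi',   # selects the driver in volume_drivers
    'data': {
        'target_discovered': False,
        'target_iqn': 'iqn.2010-10.org.openstack:'
                      'volume-5f800cf0-324f-4234-bc6b-e12d5816e962',
        'target_portal': '192.168.0.10:3260',
        'target_lun': 1,
        'auth_method': 'CHAP',
        'auth_username': 'chap-user',
        'auth_password': 'chap-secret',
        'volume_id': '5f800cf0-324f-4234-bc6b-e12d5816e962',
    },
}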

 

# nova\virt\libvirt\driver.py:LibvirtDriver.attach_volume()

@exception.wrap_exception()
def attach_volume(self, connection_info, instance_name, mountpoint):
    virt_dom = self._lookup_by_name(instance_name)
    mount_device = mountpoint.rpartition("/")[2]
    # May need changes for the cloud disk; this method is analyzed below.
    conf = self.volume_driver_method('connect_volume',
                                     connection_info,
                                     mount_device)

    if FLAGS.libvirt_type == 'lxc':
        self._attach_lxc_volume(conf.to_xml(), virt_dom, instance_name)
    else:
        try:
            # Attach the device to the VM.
            virt_dom.attachDevice(conf.to_xml())
        except Exception, ex:
            if isinstance(ex, libvirt.libvirtError):
                errcode = ex.get_error_code()
                if errcode == libvirt.VIR_ERR_OPERATION_FAILED:
                    self.volume_driver_method('disconnect_volume',
                                              connection_info,
                                              mount_device)
                    raise exception.DeviceIsBusy(device=mount_device)

            with excutils.save_and_reraise_exception():
                self.volume_driver_method('disconnect_volume',
                                          connection_info,
                                          mount_device)

    # TODO(danms) once libvirt has support for LXC hotplug,
    # replace this re-define with use of the
    # VIR_DOMAIN_AFFECT_LIVE & VIR_DOMAIN_AFFECT_CONFIG flags with
    # attachDevice()
    # Re-define the domain so the attachment is persisted indirectly.
    domxml = virt_dom.XMLDesc(libvirt.VIR_DOMAIN_XML_SECURE)
    self._conn.defineXML(domxml)

 

# nova\virt\libvirt\driver.py:LibvirtDriver.volume_driver_method()

def volume_driver_method(self, method_name, connection_info,
                         *args, **kwargs):
    driver_type = connection_info.get('driver_volume_type')
    if not driver_type in self.volume_drivers:
        raise exception.VolumeDriverNotFound(driver_type=driver_type)
    driver = self.volume_drivers[driver_type]
    method = getattr(driver, method_name)
    return method(connection_info, *args, **kwargs)

def __init__():
    ......
    self.volume_drivers = {}
    for driver_str in FLAGS.libvirt_volume_drivers:
        driver_type, _sep, driver = driver_str.partition('=')
        driver_class = importutils.import_class(driver)
        self.volume_drivers[driver_type] = driver_class(self)

volume_drivers is populated from the libvirt_volume_drivers config option, whose default is:

cfg.ListOpt('libvirt_volume_drivers',
            default=[
                'iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver',
                'local=nova.virt.libvirt.volume.LibvirtVolumeDriver',
                'fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver',
                'rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver',
                'sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver'
            ],
            help='Libvirt handlers for remote volumes.'),

The cloud-disk service can reuse the existing iSCSI driver, or implement its own driver modeled on it (a hypothetical skeleton is sketched at the end of this subsection). The iSCSI driver reads as follows:

# nova\virt\libvirt\volume.py:LibvirtISCSIVolumeDriver:

class LibvirtISCSIVolumeDriver(LibvirtVolumeDriver):
    """Driver to attach Network volumes to libvirt."""

    def _run_iscsiadm(self, iscsi_properties, iscsi_command, **kwargs):
        check_exit_code = kwargs.pop('check_exit_code', 0)
        (out, err) = utils.execute('iscsiadm', '-m', 'node', '-T',
                                   iscsi_properties['target_iqn'],
                                   '-p', iscsi_properties['target_portal'],
                                   *iscsi_command, run_as_root=True,
                                   check_exit_code=check_exit_code)
        LOG.debug("iscsiadm %s: stdout=%s stderr=%s" %
                  (iscsi_command, out, err))
        return (out, err)

    def _iscsiadm_update(self, iscsi_properties, property_key,
                         property_value, **kwargs):
        iscsi_command = ('--op', 'update', '-n', property_key,
                         '-v', property_value)
        return self._run_iscsiadm(iscsi_properties, iscsi_command, **kwargs)

    @utils.synchronized('connect_volume')
    def connect_volume(self, connection_info, mount_device):
        """Attach the volume to instance_name"""
        iscsi_properties = connection_info['data']
        # NOTE(vish): If we are on the same host as nova volume, the
        #             discovery makes the target so we don't need to
        #             run --op new. Therefore, we check to see if the
        #             target exists, and if we get 255 (Not Found), then
        #             we run --op new. This will also happen if another
        #             volume is using the same target.
        try:
            self._run_iscsiadm(iscsi_properties, ())
        except exception.ProcessExecutionError as exc:
            # iscsiadm returns 21 for "No records found" after version 2.0-871
            if exc.exit_code in [21, 255]:
                self._run_iscsiadm(iscsi_properties, ('--op', 'new'))
            else:
                raise

        if iscsi_properties.get('auth_method'):
            self._iscsiadm_update(iscsi_properties,
                                  "node.session.auth.authmethod",
                                  iscsi_properties['auth_method'])
            self._iscsiadm_update(iscsi_properties,
                                  "node.session.auth.username",
                                  iscsi_properties['auth_username'])
            self._iscsiadm_update(iscsi_properties,
                                  "node.session.auth.password",
                                  iscsi_properties['auth_password'])

        # NOTE(vish): If we have another lun on the same target, we may
        #             have a duplicate login
        self._run_iscsiadm(iscsi_properties, ("--login",),
                           check_exit_code=[0, 255])

        self._iscsiadm_update(iscsi_properties, "node.startup", "automatic")

        host_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s" %
                       (iscsi_properties['target_portal'],
                        iscsi_properties['target_iqn'],
                        iscsi_properties.get('target_lun', 0)))

        # The /dev/disk/by-path/... node is not always present immediately
        # TODO(justinsb): This retry-with-delay is a pattern, move to utils?
        tries = 0
        while not os.path.exists(host_device):
            if tries >= FLAGS.num_iscsi_scan_tries:
                raise exception.NovaException(_("iSCSI device not found at %s")
                                              % (host_device))

            LOG.warn(_("ISCSI volume not yet found at: %(mount_device)s. "
                       "Will rescan & retry.  Try number: %(tries)s") %
                     locals())

            # The rescan isn't documented as being necessary(?), but it helps
            self._run_iscsiadm(iscsi_properties, ("--rescan",))

            tries = tries + 1
            if not os.path.exists(host_device):
                time.sleep(tries ** 2)

        if tries != 0:
            LOG.debug(_("Found iSCSI node %(mount_device)s "
                        "(after %(tries)s rescans)") %
                      locals())

        connection_info['data']['device_path'] = host_device
        sup = super(LibvirtISCSIVolumeDriver, self)
        return sup.connect_volume(connection_info, mount_device)

    @utils.synchronized('connect_volume')
    def disconnect_volume(self, connection_info, mount_device):
        """Detach the volume from instance_name"""
        sup = super(LibvirtISCSIVolumeDriver, self)
        sup.disconnect_volume(connection_info, mount_device)
        iscsi_properties = connection_info['data']
        # NOTE(vish): Only disconnect from the target if no luns from the
        #             target are in use.
        device_prefix = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-" %
                         (iscsi_properties['target_portal'],
                          iscsi_properties['target_iqn']))
        devices = self.connection.get_all_block_devices()
        devices = [dev for dev in devices if dev.startswith(device_prefix)]
        if not devices:
            self._iscsiadm_update(iscsi_properties, "node.startup", "manual",
                                  check_exit_code=[0, 255])
            self._run_iscsiadm(iscsi_properties, ("--logout",),
                               check_exit_code=[0, 255])
            self._run_iscsiadm(iscsi_properties, ('--op', 'delete'),
                               check_exit_code=[0, 21, 255])

In other words, the driver essentially implements two methods: connecting the volume to the compute host, and disconnecting it from the host again.
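
For the cloud-disk protocol, a custom driver can keep exactly this shape. Below is a hypothetical skeleton, not existing code: the clouddisk-cli tool, its stdout contract, and the fields read from connection_info['data'] are all placeholder assumptions.

# Hypothetical skeleton for an in-house cloud-disk driver, modeled on
# LibvirtISCSIVolumeDriver. 'clouddisk-cli' and the data fields are
# placeholders, not existing code.
from nova import utils
from nova.virt.libvirt.volume import LibvirtVolumeDriver


class LibvirtCloudDiskVolumeDriver(LibvirtVolumeDriver):
    """Attach cloud-disk volumes to libvirt domains."""

    @utils.synchronized('connect_volume')
    def connect_volume(self, connection_info, mount_device):
        data = connection_info['data']
        # Expose the remote volume as a local block device; this is where
        # the host-side logic of the old cloud-disk agent would live. The
        # (assumed) CLI prints the resulting device path on stdout.
        out, _err = utils.execute('clouddisk-cli', 'attach',
                                  data['volume_id'], run_as_root=True)
        # The base class builds the libvirt <disk> XML from device_path.
        connection_info['data']['device_path'] = out.strip()
        return super(LibvirtCloudDiskVolumeDriver,
                     self).connect_volume(connection_info, mount_device)

    @utils.synchronized('connect_volume')
    def disconnect_volume(self, connection_info, mount_device):
        super(LibvirtCloudDiskVolumeDriver,
              self).disconnect_volume(connection_info, mount_device)
        # Tear down the host-side mapping once libvirt no longer uses it.
        utils.execute('clouddisk-cli', 'detach',
                      connection_info['data']['volume_id'], run_as_root=True)

Such a driver would be registered by appending 'clouddisk=nova.virt.libvirt.volume.LibvirtCloudDiskVolumeDriver' (or its own module path) to libvirt_volume_drivers, and would be selected whenever initialize_connection returns 'driver_volume_type': 'clouddisk'. Keeping the host-side attach/detach inside the driver leaves the rest of the flow in section 2.2 unchanged.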

 

 

2.3     Relevant source files

Source of nova\volume\cinder.py (the methods the cloud-disk service must implement, i.e. the APIs to be wrapped, are all in here):    https://github.com/openstack/nova/blob/stable/folsom/nova/volume/cinder.py

Source of nova\virt\libvirt\volume.py (the driver the cloud-disk service needs can be modeled on this file):    https://github.com/openstack/nova/blob/stable/folsom/nova/virt/libvirt/volume.py

# The default driver mapping; note that iSCSI volumes use
# LibvirtISCSIVolumeDriver
cfg.ListOpt('libvirt_volume_drivers',
            default=[
                'iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver',
                'local=nova.virt.libvirt.volume.LibvirtVolumeDriver',
                'fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver',
                'rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver',
                'sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver'
            ],
            help='Libvirt handlers for remote volumes.'),

 

Source of the abstract class in cinder that handles the various API requests:    https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py

This class dispatches to different drivers to perform the actual work behind each API request. The iSCSI driver source is:

# The default volume driver is cinder.volume.drivers.lvm.LVMISCSIDriver
cfg.StrOpt('volume_driver',
           default='cinder.volume.drivers.lvm.LVMISCSIDriver',
           help='Driver to use for volume creation'),

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/lvm.py#L304

LVMISCSIDriver inherits from both LVMVolumeDriver and driver.ISCSIDriver. The latter is defined in:
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L199
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L339
The self.tgtadm used there is initialized at:
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/lvm.py#L321
and ends up calling the methods at:
https://github.com/openstack/cinder/blob/master/cinder/volume/iscsi.py#L460

iscsi_helper defaults to tgtadm:

cfg.StrOpt('iscsi_helper',
           default='tgtadm',
           help='iscsi target user-land tool to use'),

3.    APIs to be added

  • An API for extending (resizing) a cloud-disk volume (alternatively, the cloud disk's existing API could be called directly, but adding the API to nova is recommended; that way the cloud-disk service does not need to expose any API of its own, and everything is proxied through nova). A sketch follows.
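
A rough sketch of what such a nova extension could look like is below. The module name, the route, and the volume_api.extend() method are assumptions (the wrapper class from section 2.1 would have to implement extend()), and the extension-descriptor and routing boilerplate, which follows the existing contrib extensions, is omitted:

# Hypothetical nova\api\openstack\compute\contrib\volume_extend.py,
# loosely modeled on the existing os-volumes extension.
import webob
from webob import exc

from nova.api.openstack import extensions
from nova.api.openstack import wsgi
from nova import volume

authorize = extensions.extension_authorizer('compute', 'volume_extend')


class VolumeExtendController(wsgi.Controller):
    """Handles POST /{tenant_id}/os-volumes/{id}/extend (assumed route)."""

    def __init__(self):
        self.volume_api = volume.API()
        super(VolumeExtendController, self).__init__()

    def extend(self, req, id, body):
        """Extend volume `id` to `new_size` GB."""
        context = req.environ['nova.context']
        authorize(context)
        try:
            new_size = int(body['extend']['new_size'])
        except (KeyError, TypeError, ValueError):
            raise exc.HTTPBadRequest(explanation='new_size (GB) is required')
        vol = self.volume_api.get(context, id)
        # Forwarded through the configured volume_api_class, so the
        # cloud-disk wrapper ends up calling its own REST API here.
        self.volume_api.extend(context, vol, new_size)
        # The resize itself is asynchronous, like the other operations.
        return webob.Response(status_int=202)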

 

4.    Issues to keep in mind

  • The error-recovery and exception-handling logic previously implemented in the cloud-disk agent must be reimplemented inside nova.
  • The mount point seen inside the guest can differ from the one reported outside (because the attach in nova is asynchronous, the device name returned to the user is the one seen by libvirt, not the actual device name inside the VM; the current plan is to return the final mount point through the volume-details query API).
  • Users and authentication (the cloud disk previously used the management platform's own authentication; switching to the nova interface means keystone authentication, and it is unclear whether the management platform layer can translate between the two).

 

Overall, the changes needed on the cloud-disk side should be small. The bulk of the work is wrapping the existing API in a client (see https://github.com/openstack/nova/blob/stable/folsom/nova/volume/cinder.py for reference). In addition, the driver (see https://github.com/openstack/nova/blob/stable/folsom/nova/virt/libvirt/volume.py) must implement the extend logic, where the existing agent code should be reusable.