
Equal Opportunities for all OpenStack Projects

A week or two later - where are we on fairness for all projects?

So, two weeks ago, I dropped a TC motion and a mailing list post and waited for the other shoe to drop.

I was pleasantly surprised - no one started shouting at me - but by trying not to point fingers at individual teams I made the text too convoluted.

So, in an effort to clarify things, here is an overview of what has been said so far, both in the mailing list and the gerrit review itself.

Feedback

... does this also include plugins within projects, like storage backends in cinder and hypervisor drivers in nova?

No - this was not clear enough. This change is aimed at projects that are points of significant cross project interaction. While there may come a point in the future where Nova compute drivers are developed out of tree (though I doubt it), that is not happening today. As a result, there are no projects in the list that would need to integrate with Nova.

Could you please clarify: do you advocate for a generic plugin interface for every project, or that each project should expose a plugin interface that allows plugins to behave as in-tree components? Because the latter is what happens with Tempest, and I see the former as a bit complicated.

For every project that has cross project interaction - tempest is a good example.

These projects should either allow all projects in tree (as Nova, Neutron, Cinder etc. are today), or they should have a plugin interface (as tempest currently does) - but then all projects must use it, and not use parts of tempest that are not exposed in that interface.

This would mean that tempest would move the nova, neutron, etc tests to use the plugin interface.

Now, that plugin could be kept in the tempest repo, and still maintained by the QA team, but should use the same interface as the other plugins that are not in that repository.

Of course, it is not just tempest - an incomplete list looks like:

  • Tempest

  • Devstack

  • Grenade

  • Horizon

  • OpenStack Client

  • OpenStack SDK

  • Searchlight

  • Heat

  • Mistral

  • Ceilometer

  • Rally

  • Documentation

And I am sure I have missed some obvious ones. (if you see a project missing let me know on the motion)

I think I disagree here. The root cause is being addressed: external tests can use the Tempest plugin interface, and use the API, which is being stabilized. The fact that the Tempest API is partially unstable is a temporary thing, due to the origin of the project and the way the scope was redefined, but again, it's temporary.

This seems to be the core of a lot of the disagreement - the problem is only temporary, it will all be fixed in the future, so things should stay as they are.

Unfortunately the discrepancy between projects is not temporary. The specific problems I have highlighted in the thread for one of the projects are temporary, but I believe the only long-term solution is to remove the difference between projects.

Before we start making lots of specific rules about how teams coordinate, I would like to understand the problem those rules are meant to solve, so thank you for providing that example. ... It's not clear yet whether there needs to be a new policy to change the existing intent, or if a discussion just hasn't happened, or if someone simply needs to edit some code.

Unfortunately there is a lot of push back from some of the projects on editing code to help plugins. Again, the differing access between projects will continue to exacerbate the problem.

"Change the name of the resolution"

—(Paraphrase from a few people)

That was done in the last patchset. I think the "Level Playing Field" title stuck in my head from the other resolution with the same name. It may have been confusing alright.

Other Areas

I feel like I have been picking on tempest a little too much; it just captures the current issues perfectly, and a large number of the community have some knowledge of it and how it works.

There are other areas across OpenStack that need attention as well:

Horizon

In Horizon, privileged projects have access to many more panels than plugins do (service status, quotas, overviews etc.). Plugins have to rely on tarballs of horizon.

OpenStack Client

In the OpenStack CLI, privileged projects have access to more commands, as plugins cannot hook into them (e.g. quotas).

Grenade

Plugins may or may not have their tempest tests run (I think that patch merged), and to get the tests to run at that point they have to use parts of tempest that I was told explicitly plugins should not use.

Docs

We can now add install guides and hook into the API Reference and API guides. This is great, and I am really happy about it. We still have issues trying to integrate with other areas in docs, and most non docs-privileged projects end up with massive amounts of user docs in docs.openstack.org/developer/<project>, which is not ideal.

OpenStack - A leveler playing field

I just proposed a review to the openstack/governance repo [0] that aims to have everything across OpenStack be plugin based for all cross project interaction, or to allow all projects access to the same internal APIs. I wanted to give a bit of background on my motivation, and how it came about.

Coming from a smaller project, I can see issues for new projects, smaller projects, and projects that may not be seen as "important".

As a smaller project trying to fit into cross project initiatives (and yes, to make sure our software looks at least OK in the Project Navigator), the process can be difficult.

A lot of projects / repositories have plugin interfaces, but also have project integrations in tree that do not follow the plugin interface. This makes it difficult to see what a plugin can, and should, do.

When we moved to the big tent, we wanted as a community to move to a flatter model, removing the old integrated status.

Unfortunately we still have areas where some projects are more equal than others - there is a lingering set of projects who were integrated at the point in time we moved, and they retain preferential status.

A lot of the effects are hard to see, and are not insurmountable, but do cause projects to re-invent the wheel.

For example, quotas - there is no way for a project that is not nova, neutron, or cinder to hook into the standard CLI or UI for setting quotas. Their quotas can be set either as extra commands (openstack dns quota set --foo bar) or via custom panels, but not the way other quotas get set.

Tempest plugins are another example. Approximately 30 of the 36 current plugins are using resources that they are not supposed to use, and which form an unstable interface. Projects in tree in tempest are in a much better position, as any change to the internal API has to be fixed before the gate merges, while out of tree plugins can be broken at any point.

None of this is meant to single out projects, or teams. A lot of the projects that are in this situation have inordinate amounts of work placed on them by the big tent, and I can empathize with why things are this way. These were the examples that currently stick out in my mind, and I think we have come to a point where we need to make a change as a community.

By moving to a "plugins for all" model, these issues are reduced. It will undoubtedly cause new ones, but it is closer to our goal of recognizing that all of our community is part of OpenStack, and differentiating projects by tags.

This won't be a change that happens tomorrow, next week, or even next cycle, but I think, as a goal, we should start moving in this direction as soon as we can, and start building momentum.

Designate Newton Design Summit

https://photos.smugmug.com/Misc/i-wSSCDSC/0/X3/IMG_20160427_201746-X3.jpg

I started writing this on a plane on the way home from Austin, TX where we just finished up the Newton design summit for OpenStack, and finished it a few days later - please excuse any weirdness in syntax or flow :)

Austin was its usual weird, wacky, wonderful self. Every time we are here I have a great time, I just can't deal with the heat :).

The summit format this year worked really well - I pretty much stayed in the design summit hotel for the week - and got some very good work done.

6 Months On: Where are we?

One of the things I did before this design summit was to look at what we had achieved last cycle.

We had a quiet cycle overall, but we did merge some vital features.

Operators no longer need to use the awful config format we created for pools in Kilo; we now have a much easier to read YAML file that is loaded into the database via the designate-manage CLI. We also now support multiple pools in a real way, with the addition of a scheduler in designate-central.

Where are we going?

So - the point of the design summit is to plan the future of the projects - and designate is no exception. We were in a (very nice) boardroom for a few hours, and we talked through quite a few things.

https://photos.smugmug.com/Misc/i-p8SxMgH/0/X2/IMG_20160427_140741-X2.jpg

The collection of etherpads for the summit is available as well.

Golang Replacement of MiniDNS

One of the nicer things about our current architecture is the flexibility we get from using the standard DNS protocols to update the target DNS servers. This has the downside, however, that we are writing code that deals with DNS packets in Python, which is slow. timsim from Rackspace has written a POC in Go that shows a very large performance improvement.

This needs to be documented, and permission requested from the TC to move this component to Go (this will be a separate post in its own right).

After we got back, it turned out that we were not the only project considering this - swift actually have a feature branch with code in place. So, based on this, we are going to collaborate with them on the integration of Go into OpenStack.

Worker Model

As we discussed in Galway, we need to replace the current *-manager services with a generic service that can scale out horizontally. As part of this we planned out our upgrade and implementation plans for these services.

Docs, Deprecations and more Docs

Docs were a common theme - we were asked to improve them, and also have them located in the main docs on the OpenStack.org website.

We had a member of the docs team in the room, who gave us some great guidance on how to include our docs in the correct place.

VMT - The application process

One of the teams that supports OpenStack is the Vulnerability Management Team. They deal with disclosures and with assigning OSSA numbers to issues that could present a problem for our deployers.

They had a session this summit on how that process might work, and designate was one of the projects chosen as a pilot, as I had previously produced Threat Analysis documentation for Designate inside Hewlett Packard Enterprise. That information is currently being processed for release to the community.

Searchlight

Searchlight is a new(ish) project in OpenStack that enables true search capabilities on clouds. We have a designate plugin, but there are issues with how we emit notifications from the v1 and the v2 APIs.

We decided that when we move the Horizon panels to v2, we will just listen for v2 notifications in Searchlight.

API Docs

There was an interesting session on how the community is moving to document their APIs. API docs will now reside in each project's repo, and are based on a custom sphinx extension that was written for OpenStack.

As this progresses we will migrate designate to these docs, and remove our current docs, as they are much harder to read.

MiniDNS, TCP, Eventlet and the internet

We had this bug come in yesterday.

It was a bit unexpected - as we tested it pretty extensively when it was being developed.

The line in question was this:

client.send(tcp_response)

In eventlet 0.17.x this behaved like the standard socket.sendall(), instead of socket.send().

(It was this commit as it turns out - it was noted in the release notes here but we missed it)

The other major problem is that the bug did not manifest itself until we pushed the AXFR over a long-distance connection.
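As a minimal sketch of the fix (this is not the actual MiniDNS code, and the helper names are mine): DNS over TCP frames each message with a two-byte length prefix, and both sides of a connection have to loop until the whole message has moved. On the sending side, sendall() does that looping for us, where a bare send() only reports how much was actually queued - which is easy to miss until the send buffer fills on a slow link:

```python
import socket
import struct


def send_dns_tcp(sock, payload):
    """Send one DNS message over TCP.

    DNS over TCP prefixes each message with a two-byte, network-order
    length. A bare sock.send() may queue only part of the buffer and
    return the count - eventlet 0.17.x happened to loop for us, which
    masked the bug - so use sendall(), which loops until every byte
    is written.
    """
    sock.sendall(struct.pack('!H', len(payload)) + payload)


def recv_dns_tcp(sock):
    """Read one length-prefixed DNS message back off the wire."""
    (length,) = struct.unpack('!H', _recv_exact(sock, 2))
    return _recv_exact(sock, length)


def _recv_exact(sock, n):
    # recv() has the mirror-image problem of send(): it may return
    # fewer bytes than requested, so loop until we have them all.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed mid-message')
        buf += chunk
    return buf
```

On a loopback connection and a small response, send() and sendall() behave identically - which is exactly why the truncation only showed up once a large AXFR went over a real network.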

Read more…

Designate Mid Cycle

http://photos.gra.ham.ie/Designate-Mid-Cycle/i-S96NrNb/0/O/IMG_3854.jpg

Designate Mid Cycle

This was a little over a week ago, and I have been trying to get my thoughts down on paper / rst.

It was a good few days overall - we had our usual arguments about the implementation of some familiar features, we listed out the reasons our past selves were wrong about everything, and how we should fix our mistakes.

Read more…

Kosmos Mid Cycle

https://upload.wikimedia.org/wikipedia/commons/5/58/Seattle_Center_as_night_falls.jpg


Kosmos Mid Cycle

We sat down in Seattle last week to plan out how we were going to kick start progress on Kosmos.

(For those of you who don't know what Kosmos is - here is the overview on the wiki )

We decided on a few things, I managed to clarify my (admittedly weird and convoluted) thought process, and we started on the initial sets of patches.

It was a small group - some people were sick from the LBaaS / VPN / FW mid cycle the week before, but enough to start.

We covered the DB, the layout, and how we will interact with it. I showed our basic use of the oslo_versionedobjects lib, and the basic pecan framework that was in place.

We have the basic design up on our specs repo here (it is not published anywhere yet - I will fix that soon).

If anyone is interested in the project, give me a ping - be it for developing drivers, code, requirements or general interest. (But, really - if you want to write code, just jump in, or ping me for what the hot topics are)

In 2 weeks time I will be off to the Designate Mid Cycle - a little closer to home this time - in Galway, Ireland.

Big Rename - Complete!!

http://replygif.net/i/789.gif

It is done !! :)

So, yesterday Tim and Kiall +2'd the change.

It was quite a bit bigger than expected (+4444, -4230) so thanks to all the reviewers who put time and effort into checking everything worked, and I hadn't made any glaring mistakes :).

For anyone interested (or mad) the change is here: Review

The Big Rename - why?

Update

Due to unforeseen circumstances I will not be able to continue work on this until Monday, so we will have to extend the freeze until Tuesday the 17th.

The following message will be put on the patches affected by the freeze.

big-rename-2-update-message.txt (Source)

Due to unforeseen circumstances I will not be able to continue work on this until
Monday, so we will have to extend the freeze until Tuesday the 17th.

On Tuesday 2015-11-17 this -2 will be removed, and code review as normal will continue.

I apologise for the delay, and will get this completed ASAP.

Further updates will be posted to http://graham.hayes.ie/posts/the-big-rename/
https://upload.wikimedia.org/wikipedia/commons/1/1a/Code.jpg


The Big Rename

So from tomorrow (2015-11-10) until Friday (2015-11-13) Designate will be frozen for new code.

I sent an email to the openstack-dev list before the summit [2], so it shouldn't be a surprise (I hope).

So, why are we doing this?

When Designate started it had a v1 API, which used the term "domains" for DNS domains. This predated Keystone's domain support, but unfortunately they also decided to use the term. As Designate was not even incubated at the time, there was not much we could do but continue on.

When we started the v2 API, we decided to avoid confusion and expose "Domains" as "Zones".

This worked for a while, but as we worked on the v2 API, we had to write code to translate the internal terminology (which referenced Domains) to Zones. This caused code like:

big-rename-render.py (Source)

def _render_object(cls, object, *args, **kwargs):
    # The dict we will return to be rendered to JSON / output format
    r_obj = {}
    # Loop over all fields that are supposed to be output
    for key, value in cls.MODIFICATIONS['fields'].items():
        # Get properties for this field
        field_props = cls.MODIFICATIONS['fields'][key]
        # Check if it has to be renamed
        if field_props.get('rename', False):
            obj = getattr(object, field_props.get('rename'))
            # if rename is specified we need to change the key
            obj_key = field_props.get('rename')
        else:
            # if not, move on
            obj = getattr(object, key, None)
            obj_key = key

and

big-rename-error-render.py (Source)

@classmethod
def _rename_path_segment(cls, obj_adapter, object, path_segment):

    # Check if the object is a list - lists will just have an index as a
    # value, and this can't be renamed
    if issubclass(obj_adapter.ADAPTER_OBJECT, objects.ListObjectMixin):
        obj_adapter = cls.get_object_adapter(
            cls.ADAPTER_FORMAT,
            obj_adapter.ADAPTER_OBJECT.LIST_ITEM_TYPE.obj_name())
        # Return the segment as is, and the next adapter (which is the
        # LIST_ITEM_TYPE)
        return path_segment, obj_adapter

    for key, value in obj_adapter.MODIFICATIONS.get(
            'fields', {}).items():

        # Check if this field is actually a nested object
        if object.FIELDS.get(path_segment, {}).get('relation', False):

            obj_cls = object.FIELDS.get(path_segment).get('relation_cls')
            obj_adapter = cls.get_object_adapter(
                cls.ADAPTER_FORMAT,
                obj_cls)

            object = objects.DesignateObject.obj_cls_from_name(obj_cls)()
            # Recurse down into this object
            path_segment, obj_adapter = cls._rename_path_segment(
                obj_adapter, object, path_segment)

            # No need to continue the loop
            break

        if not isinstance(
            value.get(
                'rename', NotSpecifiedSential()), NotSpecifiedSential)\
                and path_segment == value.get('rename'):
            # Just do the rename
            path_segment = key

    return path_segment, obj_adapter

which proved to be quite fragile. (Yes, that last bit of code is trying to walk the error path of a JSONSchema error and rename the path to the new terminology.)

We are also logging messages about domains - which could prove to be quite confusing when we remove the v1 API and users are only interacting with zones.

So tomorrow morning-ish (UTC) we will start to rename every occurrence of "domain" in the code base to "zone" (without breaking Keystone domain support).
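The mechanical part of such a rename can be sketched as a case-preserving, whole-token substitution with a deny-list for the Keystone sense of the word (everything here is illustrative - the deny-list entries are hypothetical, and this is not the script we actually used):

```python
import re

# Hypothetical deny-list: identifiers that genuinely refer to
# Keystone domains and must survive the rename untouched.
KEEP = {'keystone_domain_id'}

# Handle the three casings the code base uses.
CASES = [('domain', 'zone'), ('Domain', 'Zone'), ('DOMAIN', 'ZONE')]


def rename(line):
    # Split on word boundaries (keeping the separators) so the
    # deny-list can be checked per token rather than per substring.
    tokens = re.split(r'(\W+)', line)
    out = []
    for tok in tokens:
        if tok in KEEP:
            out.append(tok)
            continue
        for old, new in CASES:
            tok = tok.replace(old, new)
        out.append(tok)
    return ''.join(out)
```

Applying it line by line across the tree turns `get_domain(domain_id)` into `get_zone(zone_id)` while leaving the deny-listed Keystone identifiers alone - the hard part, of course, is building the right deny-list and reviewing the +4444/-4230 diff that falls out.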

Anything that does not have the "the-big-rename" topic on Gerrit will be getting a procedural -2 from me or the designate-core team.

big-rename-2-message.txt (Source)

Currently Designate is undergoing a code freeze to allow us re-factor the code base,
as announced here: - http://lists.openstack.org/pipermail/openstack-dev/2015-October/077442.html

All code that is not in the "the-big-rename" topic will be getting this procedural -2.

More information can be found here: - http://graham.hayes.ie/posts/the-big-rename/

On Friday 2015-11-13 this -2 will be removed, and code review as normal will continue.

This re-factor will break the majority of patches that are outstanding, and you may need to manually
rebase your patch when we remove the -2.

OpenStack Summit - Designate Report

https://upload.wikimedia.org/wikipedia/commons/7/70/Ginza_at_Night,_Tokyo.jpg


Tokyo

Oh, what a place. I thought I was ready for the experience, but this city has to be visited to be understood. From the 2 or 3 different metro systems that snake across the city, to the 9 story electronics stores, it is something else.

Coming from a city that has 3 metro lines that don't actually connect, it was a nice change to be able to get reliable public transport :).

Design Summit

This was a much more relaxed summit for Designate. We had done a huge amount of work in Vancouver, and we were nailing down details and doing cross project work.

We got a few major features discussed, and laid out our priorities for the next cycle.

We decided on the following:

  1. Nova / Neutron Integration

  2. Pool Scheduler

  3. Pool Configuration migration to database

  4. IXFR (Incremental Zone Transfer)

  5. ALIAS Record type (Allows for CNAME like records at the root of a DNS Zone)

  6. DNSSEC (this may drag on for a cycle or two)

Nova & Neutron Integration

This is progressing pretty well, and Miguel Lavalle has patches up for this. He, Kiall Mac Innes and Carl Baldwin demoed this in a session on the Thursday. If you are interested in the idea, it is definitely worth a watch here

Pool Scheduler

A vital piece of the pools re-architecture that still needs to be completed. There is no great debate on what we need, and I have taken on the task of finishing it out.

Pool Configuration migration to database

Our current configuration file format is quite complex, and moving it to an API allows us to iterate on it much more quickly, while reducing the complexity of the config file. I recently had to write an ansible play to write out this file, and it was not fun.

Kiall had a patch up, so we should be able to continue based on that.

IXFR

There was quite a lot of discussion on how this will be implemented, both in the work session and in the 1/2 day session on the Friday. Tim Simmons has stepped up to continue the work on this.

ALIAS

This is quite a sought-after feature - but it is quite complex to implement. The DNS RFCs explicitly ban this behavior, so we have to work the solution around them. Eric Larson has been doing quite a lot of work on this in Liberty, and is going to continue in Mitaka.

DNSSEC

This is a feature that we have been looking at for a while, but we started to plan out our roadmap for it recently.

We are (well, I am) allergic to storing private encryption keys in Designate, so we had a good conversation with the Barbican team about implementing a signing endpoint that we would post a hash to. This work is now on me to drive for Mitaka, so we can consume it in N.

There are some raw notes in the etherpad, and I expect we will soon see specs building out of them.

Starting Blogging (Again)

Having taken a hiatus of a year or two from blogging, I have decided to give it another shot.

A lot has happened since then, and recently I have started to do more and more work in upstream OpenStack. Having just started as a PTL, I thought it might be useful to have somewhere for me to rant, rave, and generally explain my (admittedly, from time to time, bizarre) thought process.

I reckon there will be flurries of activity here, followed by silence, but we shall see what happens in time :)