OpenStack infrastructure automation with Terraform – Part 1

Update: The OpenStack provider has been merged into Terraform. It ships with the default Terraform download as of 0.4.0.

Get it HERE: https://terraform.io/downloads.html

Then proceed directly to the second part of this series to get up and running with Terraform on OpenStack quickly!

Or.. read more below for the original post.


CF Push and case insensitive clients

So here’s a weird one that may save someone some time…

Trying to perform a cf push with a .jar file fails with a strange error.

Lots of head-scratching ensued:

It looks like (and is) a local error unpacking the JAR; we haven’t even touched the PaaS/Cloud Foundry yet!

Unpacking the JAR manually also complains. Hmm.

You’re running Mac OS X? … (Or maybe Windows?)

Looks like the issue is that the JAR has been built on a CASE-SENSITIVE OS (in our case, Jenkins on Linux)

… and you’re trying to run cf push on a CASE-INSENSITIVE OS (in our case, a 2013 Retina MacBook Pro). A JAR built on a case-sensitive filesystem can legitimately contain two entries whose names differ only in case, and those entries collide when extracted on a case-insensitive filesystem.

Workaround..

Run the same cf push from a linux box and it works fine.
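If you want to confirm this is the problem before hunting down a Linux box, you can scan the JAR for entries whose names differ only by case (a JAR is just a zip archive). A minimal sketch — the helper name and the demo archive below are mine, not part of the cf CLI or any build tool:

```python
"""Detect JAR entries that differ only by case. Such entries collide
when the archive is extracted on a case-insensitive filesystem
(macOS, Windows), which is exactly the failure mode described above."""
import io
import zipfile
from collections import defaultdict

def find_case_collisions(jar_file):
    """Return groups of entry names that differ only by case."""
    groups = defaultdict(list)
    with zipfile.ZipFile(jar_file) as jar:
        for name in jar.namelist():
            groups[name.lower()].append(name)
    return [names for names in groups.values() if len(names) > 1]

# Build a small in-memory "JAR" with a case collision to demonstrate;
# in practice you would pass the path of the pushed .jar instead.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/LICENSE", "ok")
    jar.writestr("META-INF/license", "collides on macOS/Windows")
    jar.writestr("com/example/App.class", "bytecode")

print(find_case_collisions(buf))
# → [['META-INF/LICENSE', 'META-INF/license']]
```

An empty result means the JAR should extract cleanly anywhere; any group printed names the files your case-insensitive filesystem will fight over.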

This link put me onto the issue (as the error is more than a bit confusing!):

https://groups.google.com/forum/#!topic/selenium-users/f8OMertwzOY

Hope this helps someone!

Openstack and PaaS; Love a good geek rift!

I’ve been in the Bay Area, CA for a couple of weeks now (excluding my cheeky jaunt to Vegas!) and even though I’m now on vacation, it’s been the perfect place to watch the OpenStack Havana drama unfold, mostly stemming from this catalyst:

http://www.mirantis.com/blog/openstack-havanas-stern-warning-open-source-or-die/

Well (for me, anyway) especially this bit:

Too many times we see our customers exploring OpenShift or Cloud Foundry for a while, and then electing instead to use a combination of Heat for orchestration, Trove for the database, LBaaS for elasticity, then glue it all together with scripts and Python code and have a native and supported solution for provisioning their apps.

“Hell no!” was my initial reaction, and while there has been a definite retraction from the tone of the whole post… I still think a “hell no” is where I stand on this.

And I’ll tell you why, but firstly:

  • I like OpenStack as an IaaS. I like its modularity for networking and the innovation taking place to provide a rock-solid IaaS layer.
  • It was a much-needed alternative to VMware for a lot of people, and its growth into stability is something I’ve enjoyed watching (competition is never a bad thing, right! 😉 ).

That said, here’s why I’ll take my PaaS served right now, with a sprinkling of Cloud Foundry:

  • People tying things together themselves with chunks of internally written scripts/Python (I’d argue even Puppet/Chef, as we strive for more portability across public/private boundaries) is exactly the kind of production environment we want to move away from as an industry:
    • Non-portable.
    • Siloed to that particular company (or more likely, project/team.)
    • Often badly maintained due to knowledge attrition.

… and into the future:

  • Defined, separated layers with nothing connecting them but an externally facing API was, in my mind, the very POINT of IaaS/PaaS/XaaS and their clear boundaries.
  • These boundaries allow for portability.
    • Between private IaaS providers and the PaaS/SaaS stack.
    • Between public/private cloud-burst style scenarios.
    • For complex HA setups requiring active/active service across multiple, underlying provider technologies.
      • Think ‘defence in depth’ for IaaS.
      • This may sound far-fetched, but it has already been used to back SLAs and protect against downtime without requiring different tooling in each location.
    • I just don’t see how a 1:1 mapping of PaaS and IaaS inside OpenStack is a good thing for people trying to consume the cloud in a flexible and ‘unlocked’ manner.

It could easily be argued that if we are only talking about private, not public, IaaS consumption, I’d have fewer points to make above. Sure, but I guess it depends on whether you really believe the future will be thousands of per-enterprise, siloed, private IaaS/PaaS installations, each with their own specifics.

As an aside, another concern I have with OpenStack in general right now is the variation among providers implementing OpenStack. Yes, there is an OpenStack API, but it’s amazing how many variations on it there are (maybe I’ll do the maths some day):

  • API versions
  • Custom additions (I’m looking at you, Rackspace!)
  • Full vs. Custom implementation of all/some OpenStack components.

Translate this to the future idea of PaaS and IaaS being offered within OpenStack, and I see conflicting requirements.

From an IaaS I’d want:

  • Ease of moving between/consuming different IaaS providers.
  • Not all IaaS providers necessarily need the same API, but one API per ‘type’ would make libraries/wrappers/Fog etc. easier.

From a PaaS I’d want:

  • Ease of use for Developers
  • Abstracted service integration
    • IaaS / PaaS providers may not be my best option for certain data storage
    • I don’t want to be constrained to the development speed of a monolithic (P+I)aaS stack to test out new Key-Value-Store-X
  • Above all, PORTABILITY

This seems directly against the above for IaaS…

That is, I don’t mind having to abstract my PaaS installation/management from multiple IaaS APIs so that I can support multiple clouds (especially if my PaaS management/installation system can handle that for me!); however, I DON’T want lots of potential differences in the presentation of my PaaS API causing issues for the ‘ease of use, just care about your code’ aspect for developers.

I’m not sure where this post stopped being a nice short piece and became more of a ramble, but I’ll take this as a good place to stop. PaaS vendors are not going anywhere IMHO, and marketing-through-bold-statements on the web is very much still alive 😉

Matt

Pivotal Cloud Foundry Conference and Multi-Site PaaS

So I recently got back from Pivotal’s first Cloud Foundry conference.

As I’m not a developer, I guess by the power of deduction I’ll settle for ‘cloud leader’.

While there, this newly appointed cloud leader, erm, led a pretty popular discussion on multi-site PaaS (with a focus on Cloud Foundry, though a lot of the ideas translate), with the intention of stirring up enough technical and business intrigue to move the conversation into something of substance and action.

I’m about to kick off further discussions on the VCAP-DEV mailing list (https://groups.google.com/a/cloudfoundry.org/forum/#!forum/vcap-dev), but a draft run here can’t hurt:

  • Having multiple separate PaaS instances is cheating (and a pain for everyone involved)
  • My definition of multi-site PaaS is currently: supporting application deployment and horizontal scalability across two or more datacentres concurrently, providing resilience and scalability beyond what is currently available.
  • The multi-site PaaS should have a single point of end user/customer interaction.
  • Key advantages should be simplified management, visibility and control for end customers.
  • Multi-Site PaaS should be architected in such a way as to prevent a SPOF or inter-DC dependencies

All well and good saying WHAT I want; how about the HOW?

  • A good starting point would be something that sits above the current cloud foundry components (ie, something new that all api calls hit first) that could perform extensions to existing functions;
    • Extend API functionality for multi-site operations
      • cf scale --sites 3 <app>
      • cf apps --site <name>
      • etc
    • Build up a map of PaaS-wide routes/DNS A records to aid in steering requests to the correct site’s GoRouters.
    • This may also be a good place to implement SSL, with support for per-application SSL certificates, as these will need to be available in any site the application spans.
  • Further thoughts that I need to pin down:
    • I’d like to see this getting health state from each DC and redirecting new requests if necessary to best utilize capacity (taking into account latency to the client of the request)

Where this layer will sit physically, and how it will become this unification layer without itself becoming the SPOF, is still playing on my mind.
Current thoughts include:

  • Calling the layer the GRouter (global router) for convenience.
  • This layer will be made up of multiple GRouter instances.
  • Passive listening to NATS messages WITHIN the GRouter instance’s site for all DNS names/apps used at that site.
  • A distributed protocol for then sharing these names with all other GRouters (NOT a shared DB/SPOF).
    • No power to overwrite another GRouter’s DB.
    • Maybe something can be taken from the world of IP routing, where local routes outrank remote updates, but remote routes can still be used if need be.
    • Loss of DB/updates causes a GRouter instance to fail open and forward all requests to local GoRouters (the failure mode still allows independent local-site operation).
    • DNS should only be steering real PaaS app requests or API calls to the GRouter layer, although these may be for a different DC, depending on use of anycast DNS *needs idea development*.
    • GRouters in normal mode can send redirects for requests for apps that are in other DCs, to allow a global anycast DNS infrastructure.
  • Multiple instances per DC, just like GoRouters, so they scale.
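The “local routes outrank remote updates” and fail-open ideas above can be sketched roughly as follows. Everything here (RouteTable, the preference values, the site names) is a hypothetical illustration borrowed from IP routing’s administrative-distance idea, not real GoRouter/NATS code:

```python
"""Sketch: a per-site route table where locally learned routes always
beat remote updates, and unknown names fail open to the local site."""

LOCAL, REMOTE = 0, 1  # lower value wins, like IP administrative distance

class RouteTable:
    def __init__(self, site):
        self.site = site
        self.routes = {}  # hostname -> (preference, owning site)

    def learn(self, hostname, origin_site):
        """Record a route advertisement (e.g. heard via NATS locally,
        or via the inter-GRouter sharing protocol for remote sites)."""
        pref = LOCAL if origin_site == self.site else REMOTE
        current = self.routes.get(hostname)
        # A remote update may never overwrite a local route.
        if current is None or pref <= current[0]:
            self.routes[hostname] = (pref, origin_site)

    def lookup(self, hostname):
        """Return the site to steer this request to. Unknown names fail
        open: just forward to this site's local GoRouters."""
        entry = self.routes.get(hostname)
        return entry[1] if entry else self.site

table = RouteTable("dc-west")
table.learn("app.example.com", "dc-west")    # local NATS advertisement
table.learn("app.example.com", "dc-east")    # remote update: ignored
table.learn("other.example.com", "dc-east")  # remote-only app: accepted
print(table.lookup("app.example.com"))      # → dc-west
print(table.lookup("other.example.com"))    # → dc-east (normal mode: redirect)
print(table.lookup("unknown.example.com"))  # → dc-west (fail open)
```

The `lookup` result for a remote site is where a normal-mode GRouter would issue its redirect; the last case is the fail-open path that keeps a site independently operable when updates are lost.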

Other points which will need discussion are:

  • How does a global ‘cf apps’ work?
  • Droplets will need saving to multi-DC blob stores for re-use in scale/outage scenarios
  • We will need a way to cope with applications that have to stay within a specific geographic region.
  • Blue/Green zero downtime deployments?
    • One site at a time
    • Two app instances within each DC and re-mapping routes
    • The second option would prevent the GRouter layer from needing to be involved in orchestration, reducing complexity.

My next steps are to map out use cases and matching data flows embodying my ideas above.

Just FYI, all this was written on a SJL>LHR flight where awake-ness was clinging to me like the plague, so expect revisions!

Matt