Bloomberg rocked NYC

Survivor: Season 2013 starring Bashar al-Assad


Pop Quiz: Syria is (choose one):

  • (A) Iraq in 2003
  • (B) Iraq in 1991
  • (C) Kosovo in 1999
  • (D) Rwanda in 1994
  • (E) None of the Above

If you answered any of the above, it’s simple: you are wrong. This is totally different and yet totally the same as all of the above conflicts and genocides. It is sad and disturbing to type my thoughts from the comfort of a first world country, with the position to criticise dictators in a (somewhat) free state (USA), and have no repercussions. But haven’t we all seen this before? What is truly different about this potential military intervention compared to, say, Iraq? The political pundits will say that the whole WMD mess, or rather the lack of WMD being found in Iraq, has made administrations cautious about future military actions against nations that have shown no aggression towards the USA. Sure, and then you pile on the Obama platform of “Change” that intended to restore the image of America to the rest of the world, and you get caution internally within the administration, while the external image is also one being treaded carefully.

But there are so many players here. Let me first state some really obvious things: America used to meet with Bashar al-Assad and his administration. There were joint military discussions, military intelligence sharing in the wake of 9-11, and general working relationships between the USA and Syria. The war in Iraq really messed up the relationship between the US and Syria, in that a bunch of nut cases started using Syria as a jumping-off platform to move into Iraq and fight US-coalition troops. So things got messed up, but still there was a hospitable relationship between the US and Syria based on some good faith between both nations. John Kerry, Bashar al-Assad, and their wives had dinner in Damascus in 2009.

Now when I say hospitable, I mean to the outside. Behind the scenes, the US was throwing money behind anti-government organizations within Syria to fight against Assad’s control. A few million here and there will buy you TV time, and you can eat away at the power at the top.

This is an oil rich nation that trades heavily with Russia. But the nation has more strategic value than a pure resource play; the location is key to creating pressure against Iran, and when Iran topples, the list of strong nations that oppose the major world powers will (in a naive sense) pretty much be erased.

So what are the motivations? Obama waited way too long to justify the “for the good of the women and children” angle. Nearly 100k people in Syria have already been killed in the two years of fighting, and the “red line” of chemical weapons is a false statement. Chemical weapons were previously used in 2012, and depending on what you classify phosphorous as, well, you could have multiple viewpoints. And chemical weapons are clearly not just sarin (anthrax and the Ebola virus, for that matter, are biological agents, not chemical ones).

Russia supplies military-grade weapons to Syria. How many? Roughly $1.5 billion of weapons and a pipeline for more orders in the future. We are talking about fighter jets and attack helicopters, but what really got the world stage so upset was the sale of surface-to-air missiles, which scares the hell out of Israel, and frankly any NATO or coalition force that is interested in changing the power players in that region.

So Russia benefits financially with Syria. They also benefit strategically by having allies in that region to continue to share a somewhat anti-US sentiment as Syria can easily take on the appearance of the larger Russian state’s persona.

There is clearly genocide in Syria. The shellings, the murder, the chemical attacks, and the 1/3 of the population being refugees as the borders overflow and folks make their way to forsaken countries like Iraq. “Out of the frying pan, into the fire”.

But what must be done? In Rwanda and Liberia, the world watched as murder on a scale not seen since (well, since Vietnam/Cambodia/Laos/Myanmar) took place. And the world powers or the UN could have done something about it. And with that comes the balance: there is always this question of imperialism or isolationism that gets tossed around like a coin, as if it would be one or the other. There is the extreme view that we (the USA) play a world role in protecting the world from the bad guys, and the contrasting view is that we should not be “nation building”, in that we need to take care of our own problems. “We need to fix Detroit before we fix Damascus, etc, etc”.

But we as citizens of a larger group of ideas need to consider that our governments are not “good”. They never have been. They act with interests in mind. Sometimes our own; sometimes there is a far more complex system at play. Just like when you auditioned for the lead in the school play and you didn’t get it. Maybe you were a better tree or backdrop.

Attacking Syria and bombing the hell out of Assad might be the right thing on a small scale, with an intention to “hurt the regime”. But at this stage in the game, it seems highly unlikely that good could come of it. Yes, it is evil that someone on any scale would hurt women and children, and for that, a world stage should be set to save people from their certain destruction. We as a world should not turn a blind eye to the genocide, and yet we ask the UN for its acceptance of this action. The same UN that has agreed to have Russia and China sit on the Security Council, the same members that have blocked measure after measure for UN missions to stop the bloodshed of civilians. Profiteering is always popular during wartime. The US has benefitted immensely in times of war.

We as citizens of the world cannot blindly accept that millions of people are in danger, that hundreds of thousands die, and that we do nothing. Yes, there are al-Qaeda fighters right alongside the FSA, and yes, there are folks that want to rise to power in Syria with blood on their hands. But there are also children and families that have nothing to do with this, and they are completely powerless in the situation that has built up around them. Just protect them as best you can.


Now of course my opinion would not convince the strongest of the Ron Paul supporters (I voted for him too!); but I have dropped my flag waving, my Noam Chomsky book waving, and my leftish-leaning ways to say: sometimes it’s OK to do something that is not pure. Sometimes it’s OK to support a cause to hurt an aggressor, even when you know the history in detail and you know that it’s not a simple Good vs Bad argument. In this case, I think the military intervention is hardly well thought out; it’s way too late, it’s wrong internationally, it’s wrong legally, but it’s right to stop the killing of people, even if it’s just for now.


The Big Meaning and Meaningless Books

I wrote this short piece of fiction in the year 2000, entitled “The Big Meaning and Meaningless Books”. It was written by pen in a notebook, a few pages at a time at night, until finally I typed it up in LaTeX and then HTML. I shared the story with a few friends, and I used to have it published on truman.net in the early 2000s. I finally took some time to edit the story and clean a few things up; I fixed a few spelling mistakes, but some errors in grammar I decided to keep, just to preserve the original. It’s like looking back at an old style that is no longer me, but it was interesting.

http://www.amazon.com/The-Meaning-Meaningless-Books-ebook/dp/B00DT7BIR0/ref=sr_1_1?s=digital-text&ie=UTF8&qid=1373169110&sr=1-1&keywords=the+big+meaning+and+meaningless+books

Anthony Whalen

Anthony Whalen. A brother, a cousin, a son, a nephew, a scientist, a designer, a dreamer, a friend, a philosopher, an intellectual.

I am so proud to have known him, to have shared this planet with him; his passion and creativity were shared with us all.

How proud and honoured I was that he was the best man at our wedding – my brother, my cousin, my friend.

To have known Anthony was to know his stories, his adventures, his life — a life that was shared — that was welcoming, warm, and engaging.

Anthony was beaming with creativity, with energy, and passion. But most importantly — with drive. He taught me how to ride a bike, how to swim, and most importantly, how to live.

At an extremely young age, Anthony was stitching and designing his own clothes; making modifications to his shoes to enable him to skateboard better. He understood the mechanics of materials, of forces, of systems. Skateboarding was just another system that he could improve.

Growing up and spending time at 1151 Evergreen Ave, meant that science and learning were everywhere. The Whalen family has always encouraged science and art — and at Anthony’s disposal were the appropriate tools.

Tools like electric grinding stones, beakers, liquid mercury, hydrochloric acid, acetylene, oxygen, and need I mention *fire*.

There was learning everyday with Anthony — and we would construct collaborative pseudo-lesson plans, usually consisting of building something, destroying something, and eventually hiding something.

Anthony and I would dig huge tunnels in my backyard, much to our parents’ dismay. It seemed like entire weeks were devoted to seeing how deep we could go. Anthony would draw diagrams of the fort that we were designing — and this was when he was no older than eight.

Anthony once came across a small sailboat; one summer we spent ages painting it, sealing it, and hoisting a battered and torn sail to a mast, dreaming of places that we would sail it — never really knowing that our parents had no intention of it ever touching water.

But water would be reached. One chilly October in Monmouth Beach, we, ten and twelve years old respectively, carried an aluminum row boat down the street — the air so cold we had to warm our bare hands with our breath — we carried the boat down Riverdale Avenue, around to the reeds — and then launched.

It took us about thirty minutes to get to the middle of the Shrewsbury River, and then, once the wind picked up, we struggled to row back to shore.

Anthony — ever positive in the face of a challenge — continued to yell out encouraging words: we will make it — another twenty minutes and we will make it.

Anthony was always this kind of spirit; alive. Inviting. Helping and brotherly.

When we weren’t outside as kids, we played video games. Anthony found all the secrets in games — he had a way to see systems, and to reverse engineer how they might work. He was the first person I knew of who figured out how to get unlimited 1-UPs in Super Mario Bros by jumping on the turtle shell in level 3-2.

Anthony was a person that truly lived – he touched our lives with his creativity, his intelligence, his love, his kindness. He was a real person that spoke about life experiences, shared his stories, and we became new.

New in the sense that we were more enlightened, more curious, more like him.

Thirty-five years young: but what a life he lived — surfing, riding, teaching us and loving us. He is always with us, we were affected by him, and his life will always be with us — as we tell his stories, think about his smile, and remember this beautiful man.

Building a Net!

It seems building a network is full of disaster,
there are choices to be made more sticky than plaster.
Most protocols talk but the new ones do not,
and we don’t want to be stuck with EIGRP rot.

They are complex and closed like a private race track,
We thought STP was dead, but it’s TRILL that is back.
My advice to my friends is to choose what is chosen,
use lots of good drafts from Kompella and Rosen.
Be afraid of the vendors that meddle and try,
to make you choose products that will wither and die,
but choose with your money, your money and buy.

Buy open systems, with drafts you can read,
do-it-yourself or hire PS if you need.
And script it all up, and make sense of the mess,
use NETCONF or YANG or some other fine REST.

Blackboxes may work, but the understanding they lack,
it’s a terrible choice for a new top-of-rack,
your servers must send, through pipes that do bend,
and when black boxes stop, it’s protocols that must mend,
to advertise vectors, and places of reach,
and this is why I stand on my soapbox and preach.

When building a fabric, one must stop to think,
it’s a slippery slope like an ice hockey rink.
Do I want a good product or build it myself?
QFabric or Cisco have a method that’s theirs,
and TRILL or OpenFlow have me jumping from chairs.

Make sure that the choice has no method to trap,
from signaling, datapath, and even encap.
A packet’s a packet, but with multicast it grows,
make sure that your mroute state copes with the flows,
and that over your tunnels you can compose,
efficiency is realised when transport is exposed.

No ships in the night, it must work over and under,
you’re building a network that’s full of wonder.
No crashing above like a sky full of thunder.

Before you CONF T, and before you ENABLE,
make sure that you sorted out your Ethernet cable,
is it fibre and optics, or copper and DAC,
one’s cheaper than the other, the other’s a pain in the rack.

Do you put a big box at the end of the row?
Or are you looking to scale, are you looking to grow?
If that is the case, at the top it must be,
but are you comfortable with a LEAF SPINEY TREE?
Is it lots of gigabits? Do you need 40GE?

Have no fear how you build it,
In you I do trust,
you will scale it and scale it, and scale it you must!
Your links will come up, your lights will blink to the brightest,
They will blink on the leftest, they will blink on the rightest!
All your packets will forward, your TTLs will change,
and most of the time the errors will range,
your links will stay up, your WAN will as well,
it’s because you hooked your syslog to a bell!

In 10 years from now when we are all IPv6,
we will be proud of your challenge, your challenge and risk!
You could have just listened to your vendor with vigor,
but you built a fantastic, vertical and horizontal figure;
you linked up the east, and you linked up the west,
and for that your HADOOP HDFS is the best.
You hooked up the north, for your traffic to enter,
We took all the data and put it in the center.

Ships in the Night Cause Me Fright

There is a common trend in networking and virtualization today: make things automated. This is a positive step. For the longest time, we have rewarded complexity. Take, for example, the focus of most professional networking certifications on corner-case scenarios, with arbitrary routing protocols being redistributed into other protocols with heaps of complex policy.

Now there seems to be a movement to “hop over” the complexities that exist in the network by building overlays. It’s not an entirely new concept, as MPLS over GRE, IPsec mesh VPNs, and even PPP/L2TP have long been our precedents for constructing connectivity over existing transports.

It’s fantastic that we are building with layers, leveraging existing frameworks, and taking the new moving part up to another layer. Take, for example, the trend towards overlays in hypervisor networking: the Data Center network remains simple, with maybe a limited number of VLANs for all the hypervisors, and tunnels between hypervisors allow for very flexible network constructs.

But when we create abstraction layers in networking, the benefit of moving things up one layer can also bring complexity and a lack of event propagation between layers.

In the heyday of MPLS adoption in Service Provider networks, I saw various carriers consider, and sometimes deploy, large-scale VPLS backbones for their core (composed of P/PE nodes), which provided MPLS Ethernet transport over either 10GE or SDH/SONET, and then they fully meshed their Multi-Service Edge MPLS nodes (i.e. purely PE) over this new VPLS domain. What they were trying to achieve was simplicity on their high-touch PEs: they wanted a service to be anywhere, and they wanted a topology simple enough that every PE could directly connect to every other PE. They were able to realize this goal, but in doing so they created other challenges. In this isolation of layers, there are issues, because we clearly have protocols and behaviour that work independently, yet we care about the end-to-end communication.

In Data Center networks there will typically be fewer transport-layer flaps or circuit issues than are experienced in Wide Area Networks, yet we should still consider that event propagation is important because it allows protocols and algorithms to learn and do things better. For example, in large-scale DCI there will likely be a need for Traffic Engineering across unequal paths. In order to properly build overlays it is important to understand the underlay topology and behavior. In MPLS networks (which can be both an underlay and an overlay) we already have features such as Fast Re-Route, bypass tunnels, BFD, and OAM. These protocols might not be required in DC networks, but some of their intrinsic benefits should be realized. If we use stateless tunnels in overlay networks, then how do we re-route when issues occur in the transport? What if there is a scheduled upgrade of a switch or router that will cause disruptions in service availability or speed? We should have a way to link the knowledge of the underlay with the overlay. Without this, networking is taking a large step backwards.
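To make the re-route question concrete, here is a minimal sketch — with hypothetical names, not any vendor's API — of an overlay path selector that will only use a tunnel whose underlay has recently answered a BFD-style probe:

```python
class Tunnel:
    """A stateless overlay tunnel endpoint plus a liveness timestamp."""

    def __init__(self, name, remote):
        self.name = name
        self.remote = remote
        self.last_probe_ok = 0.0  # time (seconds) of last successful underlay probe

    def probe_ok(self, now):
        """Record a successful underlay probe (a BFD-style keepalive)."""
        self.last_probe_ok = now

    def alive(self, now, detect_time=0.9):
        """The tunnel is usable only if the underlay answered recently."""
        return (now - self.last_probe_ok) < detect_time


def pick_path(primary, backup, now):
    """Prefer the primary tunnel; fail over when its underlay goes silent."""
    if primary.alive(now):
        return primary
    if backup.alive(now):
        return backup
    return None  # both underlay paths are dead: better to signal than blackhole
```

The point is not the ten lines of Python, but the coupling: without some probe feeding `last_probe_ok`, a stateless tunnel keeps encapsulating into a dead transport forever.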

Overlay networking using VXLAN, NV-GRE, STT, CAPWAP, etc is highly useful for keeping portions of the complexity away from the transport, in the same way that MPLS labels have no relation to the MAC forwarding tables in underlay Ethernet switches. But we have always learned in networking that with more knowledge about vectors, about links, and about load, we are able to make more solid decisions and take actions.

Presentation at SDN Steering Group Meeting with Financials

Traditional Protocols Matter with SDN

First of all, I have been saying this for over a year: We have been doing SDN for a long time. We, as an industry, have been programming changes into our networks, with our own vision of what needs to happen, when it needs to happen, and doing things that protocols do not normally offer.

Software Defined Networking is useful when your protocols don’t cut it.

Let’s face it. Almost every protocol (OSPF, IS-IS, RIP, MP-BGP, etc) does one thing: it advertises vectors, or reachability of a destination. Some protocols do it better than others, some allow for more data to come along with those vectors such as tags, communities, targets, etc. Then you strap some logic on your routing nodes to interpret the received routing data, and you make decisions. Remember Policy Based Routing (PBR)? How is this any different than SDN? I would venture to say SDN offers a possibility to achieve many of the ideas discussed 8-10 years ago during PBR discussions.

The Method Matters

I have spent time immersed in SDN implementations, presentations, vendor meetings, and even knee-deep in the code. When you want to make changes to your network there are some basic questions that need to be asked:

  • Does this network change need to occur dynamically?
  • How can we quantify “dynamic”? Is this every day, every week, or changes that occur in a totally flexible, non-deterministic manner? (i.e. people clicking things whenever they want will make changes to the network)
  • Do we have the tools to prevent bad stuff from happening? (Imagine that you allowed for the dynamic creation of MPLS LSPs across your network. What would happen if you did not implement safe upper limits on RSVP-TE objects, or no limits on the number of paths?)
  • How complex will this “simple program” be?

Protocols are Software

Protocols have state machines; they learn and distribute information and they have well defined routines with clearly articulated and implemented behavior. This can be said for the majority of the well used routing protocols that are defined by the IETF. Furthermore, people have learned how to work with these protocols, and for the protocols that allow additional extensions to their functionality, there is almost anything you can do with the flexible framework. Think: IS-IS TLVs or new BGP SAFI/NLRI/GENAPP.

There is already so much that the networking software industry has made available to developers and operators. As we want to programmatically make configuration changes and drive ‘show commands’ on nodes we could use Expect libraries in any number of modern programming languages (PERL, Python, Ruby, etc), or drive alternative data models with NETCONF, RESTAPIs, YANG, OpenFlow Protocol, or SNMP.
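As a sketch of the NETCONF option, the following builds an <edit-config> RPC payload with nothing but the Python standard library. The interface subtree is made up for illustration, and a real deployment would send this over an SSH session with a library such as ncclient rather than printing it:

```python
import xml.etree.ElementTree as ET

# The NETCONF base namespace, per RFC 6241.
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"


def edit_config_rpc(message_id, config_subtree):
    """Wrap a device-specific <config> subtree in a NETCONF <edit-config> RPC."""
    rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": str(message_id)})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}candidate")  # edit the candidate datastore
    config = ET.SubElement(edit, f"{{{NC}}}config")
    config.append(config_subtree)
    return ET.tostring(rpc, encoding="unicode")


# A made-up, vendor-neutral subtree: set an interface description.
iface = ET.Element("interface")
ET.SubElement(iface, "name").text = "ge-0/0/0"
ET.SubElement(iface, "description").text = "uplink to ToR-1"
payload = edit_config_rpc(101, iface)
```

The win over Expect-style screen scraping is exactly this: the change is structured data with a defined envelope, not a stream of CLI text to be parsed.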

So with all this choice comes the paralysis of complexity. Let’s take CLOUD for example, and think about the need for some flexibility in network topology and automation. In a very simple scenario, each tenant in a cloud environment will need some basic network isolation and routing. In enterprise clouds, this looks like a VLAN with some DHCP assignment. The cloud management system would work well if the association of a Virtual Machine (VM) to a network was flexible; and thus it would be useful to allow for the automation in the creation of the VLAN.

In OpenStack this could be achieved by nova-network, which, if you are using Linux bridging, will gladly create a new VLAN, IP address, and bridge instance in the kernel. The challenge with automation is that there is usually more to the picture: the on-ramp and off-ramp of traffic beyond the hypervisor. This could be the VRF interface binding, the VLAN creation on the ToR, or the security policies on the ToR or other Layer 2/3 devices that will process frames/packets for this newly instantiated network.
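As an illustration of the hypervisor-side half of that picture, here is a sketch of the Linux commands a nova-network-style agent runs for Linux bridging. The helper function and interface names are hypothetical, but the commands are the standard VLAN sub-interface and bridge plumbing:

```python
def vlan_plumbing(parent_if, vlan_id, bridge):
    """Return the shell commands to plumb a tenant VLAN on a hypervisor:
    create a VLAN sub-interface on the uplink, then attach it to a bridge."""
    vif = f"{parent_if}.{vlan_id}"
    return [
        f"ip link add link {parent_if} name {vif} type vlan id {vlan_id}",
        f"ip link set {vif} up",
        f"brctl addbr {bridge}",
        f"brctl addif {bridge} {vif}",
        f"ip link set {bridge} up",
    ]
```

Five commands, and none of them touch the ToR — which is precisely the gap the paragraph above describes: the VLAN still has to exist on the upstream switch port for any of this to carry traffic.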

Sure, Quantum would be a useful element to create things dynamically. We could even find a way to have Quantum drive RESTAPIs over to an OpenFlow controller, or have Quantum directly make changes to upstream devices or ToRs. Then things get very complex because we have decided that there are different ways to propagate change events into our network.

When it comes to networking, there is a science. There is an agreed-upon method for the distribution of information, and there are agreed-upon data formats. There have been plenty of protocols that were vetted in the IETF only to have vendors take different approaches to implementation, and then the vendors either duke it out in the IETF or the consumer market demands that they fix their code and make things interoperate. Interoperating is good for the consumer, and in the long run it is probably even good for the vendor, if they wish to gain customers through goodwill and upgrade strategies.

Today, as a technologist in the field of networking, I have a lot of ways to build cloud networking. I could fully embrace the overlay networking camp and do something like this:

  • Build a highly scalable Layer 3 Data Center Fabric
  • Install some cutting-edge bridging and tunneling software on each hypervisor (OVS or similar)
  • Pre-build all my public-facing network constructs

Now, all of this web of software might just work; but imagine when it doesn’t. In a scenario of network failure, we need to start from some layer. Which layer first? Is the issue related to transport between hypervisors? If so, is it a true transport problem, or an issue with tunnel encapsulation or transmission on the hypervisor? Are VLANs necessary on the ToR? Is the ToR routing correctly between other ToRs?

The argument that I believe needs to be brought to the table is that the complexity of network control in software needs to be very similar to existing behavior, or it needs to have extremely open details about its intended behavior. This is what protocols bring to the table: they are not black boxes; you can read up on how they operate, and pretty quickly you will understand why something is working or not.

BGP is the Truth

I had this discussion the other day with one of BGP’s creators; I told him that “I can get anyone off the street and ask them to show RIB-OUT and RIB-IN between two systems and get the truth; for me BGP is the truth”. Yes, I trust some well-defined XML-RPC exchanges or REST interfaces; but what I really like about BGP and other well-defined protocols is that they have clear definitions and behavior. Of course there is enough rope provided to hang yourself when you get into complex VPN topologies with Route Targets leaking between a series of tables. But this is the fun of having a tool like this in your tool belt. You are free to have seriously complex or easy-to-understand network constructs.

Back to a previous thought from above: in CLOUD networking we usually need to create a VLAN per tenant, but if this is too complex to automate outside of the hypervisor, we might just pre-build all the VLANs on the ToRs and L2 infrastructure. The downside to this approach, however, is that it can leave inefficient layer 2 domains. A much better approach would involve running MVRP between the virtual switch on the hypervisor and the ToR. With MVRP, we could have OpenStack’s Quantum or OVS simply advertise the presence of the VLAN to the switch, and the layer 2 domain would be dynamically programmed on the directly connected ToR.

All in all, my thoughts are that it’s exciting to create new models for connectivity between hosts and nodes in networks; but we need to ensure that enough simplicity, or “nodes of truth”, exists. There should be authoritative sources of truth on reachability, and for the last 30 years this has been visible in two places: the RIB and the FIB. When we move into new domains that have their own concepts of state, control, and abstraction, we need to have a way to monitor this new behavior. It’s not an entirely new problem; even today in the networking world we rarely see a true mix of optical transport layers (DWDM, SDH/SONET, etc) and Layer 2/3, and it will remain important to remember there is still a lot to be learned about scale from the Internet. Tools like MP-BGP, Route Reflectors, Route Targets, NETCONF, etc are just as useful today in a world of OpenFlow, Quantum Plugins, and new Linux kernel modules. The future is now, and the past is important.


Roundcube on Nginx+PHP5fpm+Ubuntu

At some point in time I will give full details on how I managed to get this going. But the one thing that may save someone a bunch of time:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=620218

Which essentially details a good workaround to fix up Net_IDNA2 like this:

root@trust:/var/log/nginx# pear install channel://pear.php.net/Net_IDNA2-0.1.1
downloading Net_IDNA2-0.1.1.tgz …
Starting to download Net_IDNA2-0.1.1.tgz (24,193 bytes)
……..done: 24,193 bytes
install ok: channel://pear.php.net/Net_IDNA2-0.1.1
root@trust:/var/log/nginx#
root@trust:/var/log/nginx# tail -f webmail.suspicious.org.error ^C
root@trust:/var/log/nginx# /etc/init.d/php5-fpm restart
* Restarting PHP5 FastCGI Process Manager php5-fpm
…done.


Electric vehicles

Lately I have been looking at electric vehicles as a second car. I take public transport, so having a limited-range car doesn’t seem so bad. The Nissan Leaf looks the most attractive, especially with incentive leasing offers. I also noticed there are some folks who upgraded the trickle charger to support rapid charging with 240V AC on the home charger, instead of buying the commercial rapid charger, which sells for nearly $2k.
