Goodbye The Network Way, welcome nyechiel.com!
Up until today, I have used this blog to document technical topics. But with May being Mental Health Awareness Month, and with many of us spending most of our days at home amid the spread of COVID-19, I have been thinking a lot about mental health recently. I wanted to put some of my thoughts, even if somewhat random, together here.
Red Hat Summit is one of my favorite events of the year. It brings together customers, partners, community members, and Red Hatters to talk about the open source innovations that are enabling the future of enterprise technology. Seeing the number of attendees grow year after year is also impressive and reassuring, as more and more folks are showing interest in Red Hat and its growing portfolio of products.
tl;dr - as of February 2020, I am back at Red Hat, focusing on OpenShift multi-cluster networking.
2019 brings a new beginning for me. After a little over five years at Red Hat, I have decided to move on to my next challenge. This is a big move, and I wanted to take a moment and reflect back on those years.
A post I wrote for the Red Hat Stack blog, on key networking features included in Red Hat OpenStack Platform 13. Read more here: Red Hat OpenStack Platform 13: five things you need to know about networking.
A short post I wrote for the Red Hat Stack blog, on what Red Hat is doing with OpenStack and OpenDaylight. Read more here: SDN with Red Hat OpenStack Platform: OpenDaylight Integration.
I recently attended the Red Hat Summit 2016 event that took place in San Francisco, CA, on June 27-30. Red Hat Summit is a great place to interact with customers, partners, and product leads, and to learn about Red Hat and the company’s direction. While Red Hat is still mostly known for its Enterprise Linux (RHEL) business, it also offers products and solutions in the cloud computing, virtualization, middleware, storage, and systems management spaces. And networking is really a key piece in all of these.
A post I wrote for the Red Hat Stack blog, trying to clarify what we are doing with RHEL OpenStack Platform to accelerate the datapath for NFV applications.
In my previous post I described my Cumulus VX lab environment, which is based on Fedora and KVM. One of the first things I noticed after bringing up the setup is that although I had L3 connectivity between the emulated Cumulus switches, I couldn’t get LLDP to operate properly between the devices.
Cumulus Linux is a network operating system based on Debian that runs on top of industry standard networking hardware. By providing a software-only solution, Cumulus is enabling disaggregation of data center switches similar to the x86 server hardware/software disaggregation. In addition to the networking features you would expect from a network operating system like L2 bridging, Spanning Tree Protocol, LLDP, bonding/LAG, L3 routing, and so on, it enables users to take advantage of the latest Linux applications and automation tools, which is in my opinion its true power.
In the previous post I briefly described the fact that many networks today are closed and vertically designed. While standard protocols are being adopted by vendors, true interoperability is still a challenge. Sure, you can bring up a BGP peering between platforms from different vendors and exchange route information (otherwise we couldn’t scale the Internet), but management and configuration are still, in most cases, vendor specific.
I have been involved with networking for quite some time now, and I have had the opportunity to design, implement, and operate different networks across different environments such as enterprise, data center, and service provider. This inspired me to create this series of short blog posts exploring the computer networking industry: my view on its history, challenges, hype and reality, and most importantly, what’s next and how we can do better.
IPv6 offers several ways to assign IP addresses to end hosts. Some of them (SLAAC, stateful DHCPv6, stateless DHCPv6) were already covered in this post. The IPv6 Prefix Delegation mechanism (described in RFC 3769 and RFC 3633) provides “a way of automatically configuring IPv6 prefixes and addresses on routers and hosts” - which sounds like yet another IP assignment option. How does it differ from the other methods? And why do we need it? Let’s try to figure it out.
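To illustrate what delegation buys you, here is a small sketch (using Python’s standard `ipaddress` module and a documentation-range prefix as a hypothetical example): the ISP delegates one short prefix to the customer router, and the router alone carves it into per-LAN /64s without the ISP needing to know the downstream topology.

```python
import ipaddress

# Hypothetical example: the ISP delegates a /56 to the CPE router
# via DHCPv6-PD (RFC 3633). The router then subdivides it into /64s,
# one per downstream LAN segment.
delegated = ipaddress.IPv6Network("2001:db8:1200::/56")

# A /56 yields 2^(64-56) = 256 possible /64 subnets.
lan_prefixes = list(delegated.subnets(new_prefix=64))

print(len(lan_prefixes))   # 256
print(lan_prefixes[0])     # 2001:db8:1200::/64
print(lan_prefixes[1])     # 2001:db8:1200:1::/64
```

The key point is that hosts on each LAN still get their individual addresses via SLAAC or DHCPv6 as usual; prefix delegation only automates handing whole prefixes to routers.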
(This is a summary version of a talk I gave at OpenStack Israel event on June 15th, 2015. Slides are available here).
A post I wrote for the Red Hat Stack blog on what’s coming in OpenStack Networking for the Kilo release. Check it out here.
The concept of Link Aggregation (LAG) is well known in the networking industry by now, and people usually consider it a basic functionality that just works out of the box. With all of the SDN hype that’s going on out there, I sometimes feel that we tend to neglect some of the more “traditional” stuff like this one. As with many networking technologies and protocols, things may not just work out of the box, and it’s important to master the details to be able to design things properly, know what to expect (i.e., what the normal behavior is), and ultimately be able to troubleshoot in case of a problem.
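One of those details is how traffic is actually distributed across the member links. Here is a deliberately simplified sketch of a layer3+4-style transmit hash policy (illustrative only; real implementations, such as the Linux bonding driver’s `xmit_hash_policy`, use different hash functions):

```python
# Simplified model of a layer3+4 LAG transmit hash policy: the egress
# member link is chosen by hashing the flow's 5-tuple fields.
def member_link(src_ip, dst_ip, src_port, dst_port, num_links):
    # Every packet of a given flow carries the same header fields, so
    # the whole flow is pinned to one member link. That prevents
    # per-flow reordering, but it also means a single flow can never
    # use more bandwidth than one physical link provides.
    key = hash((src_ip, dst_ip, src_port, dst_port))
    return key % num_links

# Repeating the same flow always selects the same link...
a = member_link("10.0.0.1", "10.0.0.2", 40000, 443, 2)
assert a == member_link("10.0.0.1", "10.0.0.2", 40000, 443, 2)
# ...while different flows may (or may not) land on different links.
b = member_link("10.0.0.1", "10.0.0.2", 40001, 443, 2)
```

This per-flow pinning is exactly the kind of “normal behavior” worth knowing before troubleshooting: an uneven link utilization across a LAG is often expected, not a fault.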
Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking - Part II: Walking Through the Implementation
Second part of the SR-IOV networking post I wrote for the Red Hat Stack blog.
Check out this blog post I wrote for Red Hat Stack on SR-IOV networking support introduced in RHEL OpenStack Platform 6. This is based on the Nova and Neutron work done in the upstream community for the OpenStack Juno release.
In the previous post, I covered some of the basic concepts behind network overlays, primarily highlighting the need to move to more robust, L3-based network environments. In this post I would like to cover network overlays in more detail, going over the different encapsulation options and highlighting some of the key points to consider when deploying an overlay-based solution.
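One practical point that applies to any encapsulation option is header overhead and its effect on MTU. As a back-of-the-envelope sketch, here is the arithmetic for VXLAN (header sizes per RFC 7348, assuming an IPv4 underlay and untagged inner Ethernet frames):

```python
# VXLAN encapsulation overhead, byte by byte (RFC 7348 header sizes),
# assuming an IPv4 underlay and no VLAN tag on the inner frame.
OUTER_IP4 = 20   # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # VXLAN header (flags + 24-bit VNI)
INNER_ETH = 14   # inner Ethernet header carried inside the tunnel

overhead = OUTER_IP4 + OUTER_UDP + VXLAN_HDR + INNER_ETH

# With a standard 1500-byte underlay MTU, the tenant-visible IP MTU
# shrinks by the full encapsulation overhead (or the underlay must
# be configured for jumbo frames instead).
tenant_mtu = 1500 - overhead
print(overhead, tenant_mtu)  # 50 1450
```

This is why overlay deployments so often either lower the tenant MTU to 1450 or raise the underlay MTU; forgetting to do one of the two is a classic source of hard-to-diagnose connectivity problems.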
A blog post I wrote for Red Hat Stack on what’s coming in OpenStack Neutron for the Juno release.
People don’t like change. IPv6 could help solve a lot of the burden in networks deployed today, which are still mostly based on the original version of the Internet Protocol, aka version 4. But the time has come, and even old tricks like throwing network address translation (NAT) everywhere are not going to help anymore, simply because we are out of IP addresses. It may take some more time, and people will do everything they can to (continue to) delay it, but believe me – there is no way around it – IPv6 is here to replace IPv4. IPv6 is also a critical part of the promise of the cloud and the Internet of Things (IoT). If you want to connect everything to the network, you had better plan for massive scale and have enough addresses to use.
The IT industry has gained significant efficiency and flexibility as a direct result of virtualization. Organizations are moving toward a virtual datacenter model, and flexibility, speed, scale, and automation are central to their success. While compute, memory, and operating systems were successfully virtualized over the last decade, primarily thanks to the x86 server architecture, networks and network services have not kept pace.
subscribe via RSS