Mike Burr
At a Glance
Skills — Languages: Python, Rust, WASM/WASI, Golang, Solidity, web front-end, scripting, shell

Employment History
Self
May 2021-present
Independent Software Developer and Researcher
Took a planned break from traditional employment to focus on personal and professional growth through self-directed learning and contributions to open-source projects.
- Rust deep-dive: personal projects including a WASM browser-based multiplayer networked Bevy game, open-source contributions, challenges and exercises.
- Miscellaneous mathematics topics: symbolic polynomial library/utilities; elliptic curves (cryptography and theory).
- Embedded learning and experimentation: ESP32 (RISC-V/Xtensa).
Performance Livestock Analytics (acquired by Zoetis): Ames, Iowa
November 2018-April 2021
Software Engineer
VMware: Palo Alto, California
April 2017-June 2018
Staff Engineer
VMware’s hybrid cloud platform group maintains a complex, mostly-Python, long-lived test framework called “Goat”. My role at VMware was to maintain and extend this framework and to participate in the development of related quality infrastructure and tools (e.g. GitLab). The product line supports a wide range of cloud providers (AWS, Rackspace, IBM, and more) as well as custom-built bare-metal solutions; accordingly, the range of technologies involved was broad and rapid ramp-up was often required.

A typical day might have included feature implementation, bug resolution, assisting test authors in using framework features, build-out of ancillary tools, CI/CD pipeline features, git hooks, conception and realization of various web portals and custom tools, documentation, and meetings. Along with this came design meetings, Agile and daily stand-up meetings, technical assistance of colleagues, and various other in-person coordination.

Solutions and technologies used include:
- Python, most of its standard library, and a wide range of third-party libraries: Django, Flask, Paramiko, Selenium-Python, NumPy and many others.
- Ruby, including GitLab customizations and extension of existing Ruby infrastructure.
- Fast, centralized logging via PostgreSQL, Rsyslog and Nginx, combined with a custom Python (Flask) web UI.
Cisco Systems: San Jose, California
November 2012-March 2017
Software Developer; QA
Cisco needed a robust test framework for its UCS product line. I started in late 2012 on a project that would become known as “Qali”, a pure-Python testing framework that ultimately became the standard testing framework for the entire business unit, and remains so to this day.

Qali models the state of the UCS in memory, combining 1) build-related meta-information describing the product’s structure, including possible hardware and configuration combinations, and 2) queries against the device itself at test-execution time to understand its state and populate the model. With this information, test authors can write high-level, readable tests describing the desired end state, or the sequence of state changes needed to execute the test, in an intuitive, object-oriented Python syntax that reflects the hierarchical structure of the product’s configuration.

My involvement with Qali and associated projects ran from early design and proof of concept to maintenance and architectural oversight of the entire project years later, after adoption by many hundreds of users spanning several departments.
Triad Semiconductor: Winston-Salem, North Carolina
January 2009-September 2012
IT Manager
The infrastructure required to effectively operate a 100+ person team of mixed-signal ASIC design and test staff is considerable; outages, performance issues and other problems become very expensive very quickly. The top-to-bottom administration of this environment was my responsibility at Triad Semiconductor.

At the time of my hiring there were numerous problems and inefficiencies that a busy team of design engineers didn’t have the time or resources to attend to. I was able to resolve many of these quickly by leveraging Linux and OSS, a solution not practical without the required domain knowledge. Much of this work is still in place today.

As a good example: when I was hired, multiple bare-metal Red Hat Linux servers were in use for circuit simulation (predominantly via Mentor Graphics). Engineers had no reliable way to gauge current and near-term capacity on these hosts, so it was common for hosts to slow to a crawl from over-use. I resolved this by consolidating the hosts into a common cluster, with one node dedicated to storage. The shared filesystem was exported over a fast, dedicated network, with multiple bonded 10GbE interfaces on each host. I deployed Puppet for centralized administration and created a web-based (Flask) UI that let engineers see utilization at a glance, know which jobs were queued, and anticipate demand more effectively. The result was much more efficient use of existing resources: not perfect, but close to the best allocation and utilization of the available hardware possible without expensive, proprietary solutions. This is representative of the many other projects I was involved with, from conception to implementation.
Some of the other especially fruitful solutions I implemented include:
- A fully OSS VPN using OpenVPN with a custom PKI, allowing centralized certificate and endpoint management.
- Replacing a costly, proprietary Linux distribution with free, easily maintained Debian variants (mostly Ubuntu).
- Replacing an expensive, slow, low-capacity tape backup system with a centralized, over-network, differential, high-capacity backup system using hot-swappable commodity eSATA hard disks.
- Comprehensive site monitoring, including backup job results, power status, and equipment temperature and humidity, using Nagios, Puppet and some custom glue logic.
- Consolidation and smarter resource utilization using host virtualization (Xen, ~v3.0 at the time).
Greatwall Systems: Winston-Salem, North Carolina
April 2007-October 2008
Senior Systems Engineer
Greatwall Systems was another Wake Forest University spin-off involving many of the same personnel as PointDx; this time I was employee number four. As an information-security start-up, Greatwall had special security requirements for both its corporate network and the test network used for product development. As the sole systems administrator, both responsibilities lay with me.

All routing, intrusion detection/prevention, email categorization (spam detection), VPN and firewall systems were implemented using Debian Linux running on commodity hardware. The development environment, along with issue tracking, was implemented using a heavily customized instance of Trac; Subversion (and later Git, via a transparent migration with no data loss) was employed for source version control. As a start-up with a tight budget, all of this was accomplished with fully open-source software, deployed, customized and maintained by me.

Facing the same budget constraints and a desire to be as efficient and cost-aware as possible, numerous other projects and responsibilities fell to me. Some of the more interesting ones include:
- Installing Linux on, and adding second-network-interface support for, the Sony PlayStation 3 (PS3), for the purposes of developing support for the Cell architecture.
- A complete IP-telephony system on commodity hardware running Asterisk and later FreeSWITCH, driving an isolated network of auto-provisioned Polycom desk phones. This project ultimately cost the company nothing: selling the obsolete phone system it replaced funded all related hardware and expenses.
- An elaborate OpenLDAP corporate directory and central authentication system, with a custom Python web UI for administration.

These projects and more were entirely my responsibility, from planning and execution to ongoing maintenance, uptime and security.
I learned a lot and really enjoyed the sense of accomplishment and the wonderful esprit de corps that comes with such an environment.
PointDx: Winston-Salem, North Carolina
April 2001-April 2007
Systems Engineer
At PointDx, a Wake Forest University research spin-off, I quickly learned the value of working with a small, smart, humble team and enjoyed a real sense of accomplishment. Starting as employee number five, I owned all of the IT infrastructure, including voice and data.

As a medical startup, data and infrastructure security was a very high priority, and this too was my responsibility. It involved, among other things, encryption of various types of data; a robust, internet-facing, IPsec-based Linux VPN endpoint, router and firewall; and sanitizing large quantities of anonymized radiology (MRI, PET) patient data used in product research and development.

As a staff-of-one IT department far into the organization’s growth, automation and centralization were critical to maintaining efficiency and scalability. This included automated deployment, update and maintenance of the Microsoft Windows-based user workstation infrastructure (including use of Samba’s then brand-new Windows Domain support). With very few exceptions, all solutions were fully free, fully open-source projects, tied together to provide a secure, “enterprise” experience despite financial constraints. This included a comprehensive org-wide OpenLDAP deployment backing various human-resources data, providing centralized single sign-on, and various other resources (including intranet DNS data).

Education
Bachelor of Science in Mathematics: Gainesville, Florida