July 15, 2017 marked the one-year anniversary of the divestiture of the Tucson Innovation Center from the fold of a Fortune 100 pharmaceutical company. A year earlier, a twenty-year journey of significant contribution to integrated drug discovery inside a large research organization was about to change: our skills and expertise were acquired by a smaller organization, and we went from a total company population of over one hundred thousand to around one hundred. What follows is a condensed version of the choices we confronted and the decisions we made as an IT organization during this transition. My hope is that it provides an interesting read and perhaps holds some useful nuggets for others in a similar situation.
What is “their” standard?
As the team became aware of the impending change, albeit without knowledge of the potential suitors, the common expectation was that we would be acquired by a larger organization that would integrate us into its existing structure. “Surely they will have standards and policies that we will have to adopt and adjust to. That’s how things are done.” Even after it became known that our new parent was less than half our own size, the reflex was still to ask, “What do they want us to do?” Fortunately, a couple of things did come quickly, including hosted email and a replacement electronic lab notebook; both were functional within the first few days. But we quickly realized that there would be many, many areas we would need to tackle ourselves, helping to define new practices along the way. Our range of scientific disciplines, and the associated instrumentation and software applications, was much broader than that of our new colleagues based in North Carolina. This wasn’t what many of the team expected, but it did engender a certain excitement around the opportunities ahead.
A bit of our IT history
When I arrived at the site in 1996 there were a couple of Windows servers under a desk and a couple of relatively new SGI Unix workstations in a back closet. A Tucson startup had been acquired the previous year during a merger of a US company (Selectide) and a German company, and was now part of a large multi-national company. The site was still essentially autonomous from an IT perspective, with all applications and servers hosted locally. Over time some of the administrative applications migrated to hosted solutions elsewhere, but due to the innovative science at the site, we developed or implemented software solutions that were often unique inside the larger organization. As was typical of many industries, our parent went through a cycle of outsourcing basic infrastructure support, bringing it back in-house, consolidating data centers, and then moving toward outsourcing again. In the midst of this cycle, we repeatedly had to justify how our requirements were unique and why they couldn’t be hosted remotely at a centralized data center: unique applications, large data volumes, proximity requirements, etc. This led to our local data center becoming more sophisticated, and our move to a new building in 2009 enabled a state-of-the-art, if smallish, data center equipped to host high-performance computing, enterprise databases, virtual server farms and pretty significant storage. In addition, despite the small size of our team and our geographic distance from the major sites, members of the IT team played strategic roles in key application and platform areas. The cumulative effect of this history was that the Tucson site ended up with a sophisticated collection of solutions and infrastructure that couldn’t easily be moved, and a highly competent and experienced team.
Cloud or on-premises
With all the buzz around cloud computing in the past few years, and the now realistic availability of hosted solutions, one of the first questions being asked last summer was whether we would pursue cloud-based solutions. Remote hosting is attractive for avoiding infrastructure capital expense, and when the geographic distance between work and server makes little difference. For example, if the needs of a new business are limited to a financials application, email, a specific vertical solution and a productivity suite, cloud hosting for everything is very attractive. Our situation was quite different. Due to our history, we had a significant and updated physical infrastructure already in place. And we had applications that generated such high volumes of data that, with current technology, it would be impractical to move to cloud storage and still run processes in a convenient, timely and economical fashion. So we pursued cloud-based solutions where the economic and IT administrative benefits were clear (e.g. email, collaboration and purchasing) and leveraged our internal hosting for the rest. As cloud hosting options continue to expand and our infrastructure replacement cycle matures, these decisions will certainly be revisited.
Build versus buy
One of the first major decisions confronting us related to the registration of chemical batches and biological results. Along with sample logistics, these three items are the essential core of our scientific data management. Coming from a large pharma background, we were aware of most of the industry-standard solutions in these areas. Unfortunately, these are typically priced for companies with significant resources and widespread use. We were neither. There were some options that might have been feasible for us, but we realized that we were in a unique situation. We had significant experience with a variety of systems, had squirreled away ideas of how we could do it better if we ever had the chance, and had a uniquely skilled team member. We also realized that we would now be a “partnering” organization, with more diverse requirements for logical data partitioning, flexibility and redactability. This led to a somewhat unusual “build” decision. A few of us had been involved in multiple generations of solutions, and we rapidly outlined architecture, identifier and process options and made decisions. Then, using strong database architecting skills and intimate knowledge of the business requirements, one of our team members was able to create a robust framework and a working set of APIs very quickly.
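The actual framework isn’t described here, but to make the shape of such a system concrete, a batch-registration core of this kind might look something like the following. This is purely a hypothetical sketch: the class names, identifier format and partner-partitioning scheme are my assumptions for illustration, not the real Icagen design.

```python
from dataclasses import dataclass
from itertools import count

# Hypothetical sketch of a batch-registration core: each newly registered
# chemical batch receives a unique sequential identifier, and records carry
# a partner key so that data for one collaboration can be logically
# partitioned (and, if needed, redacted) independently of the others.

@dataclass
class Batch:
    batch_id: str
    structure: str   # e.g. a SMILES string for the chemical structure
    partner: str     # logical partition key for partnering work

class BatchRegistry:
    def __init__(self, prefix="TUC"):   # "TUC" prefix is an invented example
        self._seq = count(1)
        self._prefix = prefix
        self._batches = {}

    def register(self, structure, partner):
        """Assign the next identifier and store the batch record."""
        batch_id = f"{self._prefix}-{next(self._seq):06d}"
        batch = Batch(batch_id, structure, partner)
        self._batches[batch_id] = batch
        return batch

    def for_partner(self, partner):
        """Return only the batches visible to a given partner partition."""
        return [b for b in self._batches.values() if b.partner == partner]

registry = BatchRegistry()
b1 = registry.register("CCO", partner="alpha")
b2 = registry.register("c1ccccc1", partner="beta")
print(b1.batch_id)                         # TUC-000001
print(len(registry.for_partner("alpha")))  # 1
```

The key design point hinted at in the text is the partition key baked into every record, which makes per-partner views and redaction a query-level concern rather than an afterthought.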
The uniqueness and strength of the team’s experience and skill set shown in the registration system decisions led to one of our early epiphanies. We could do quality work without a broad and complex managed project portfolio, large steering committees, project charters and multiple levels of review and sign-off. Those who have only worked in a small organization might not understand our newfound feelings of agility and productivity. We still kept a portfolio of sorts, assigned responsibilities, reviewed designs and documented where necessary. But the speed at which we could now execute was almost unnerving. I am not disparaging the structure and processes of our large-company past. I am very thankful for all the valuable training and experience that environment provided to the team, and I understand the complexities of a large environment and the need for controls and process to manage accountability, prioritization and end points. But it can be a struggle to be agile when the standard project process is designed for very large systems, oftentimes in regulated environments. This nimble aspect of our new environment was refreshing.
Buy versus build
The financial terms of the change required the purchase of many software licenses. Just as we had a strong build conviction for one area, as mentioned above, there were many more decisions to buy instead of build. In a few areas, our experience and requirements made it an easy decision to renew our existing solution. In others, we were already at a technology crossroads and it was a good opportunity to re-evaluate. We were able to consolidate to a single vendor in a couple of places and are already seeing the benefits of that built-in integration.
Re-wiring while under sail
The buyer and seller in our transaction realized that we would need a transition period as we migrated our many processes and systems. One of the more challenging aspects was the internal networking and foundation servers. I sometimes compared it to being on a functioning ship at sea and needing to re-wire the entire vessel without any interruption to normal day-to-day operations. Instead of a flash cut, we had to decommission initial pieces of equipment from legacy support, create a parallel Icagen network, and then migrate servers and applications to the new network, repeating this through multiple iterations. Our network guru had to architect some pretty unique scenarios to enable the transition while meeting the strict security requirements involved. A hugely successful milestone was migrating all the office-side workstations to a new Icagen Windows 10 image over a single weekend. This was not an area we had much experience in architecting or executing and, with the help of some excellent external resources, the team did a terrific job of planning and execution.
Our environment has two distinct pieces. On one side of our physical building is a traditional office environment with desks, offices, printers, phones, conference rooms and more traditional office IT. On the other is a laboratory side with over a hundred computers attached to instruments of various sorts supporting our different disciplines of chemistry, biology, sample management and analytics. This is a challenging area because instruments were acquired over time, with attached computers which are typically difficult to upgrade and keep current. We support a mix of operating systems, going back more generations than we would care to admit. In this environment, it is hard to provide an appropriate level of network protection and still enable fluid and efficient operations for the scientists. In the past, sometimes centralized policies would get cross-wired and instrument computers would receive a pushed update and automatic reboot in the middle of an important experiment. Or, the standard anti-virus was so resource intensive that it interfered with the already finicky communications between PC and instrument. Scientists don’t appreciate these constraints and would sometimes prefer to pull the computer off the network entirely. We reflected on these past experiences and brainstormed new approaches. The team came up with some practical and clever solutions for the lab. These included an effective but light-weight anti-virus, more consistent and less invasive group policies, ubiquitous network attachment, and a clever account login, security and file-sharing mechanism. We are also working toward the goal of eliminating portable USB-attached storage devices. The IT team believes we can get there, but we need to demonstrate the effectiveness and lack of disruption of these new practices first.
When we were drawing up the plans for the transition, those with experience were suggesting a twelve-month period. I thought that seemed excessively long and fully expected that we could be done in half the time. After all, the entire, very experienced IT team was staying together for the adventure. We plunged in, making project plans, prioritizing activities, and all the while keeping all the systems operational. There was so much to do. Who knew it could be so complex? Some choices were delayed by new opportunities to be researched, and every now and then a wrench would fall into the gears and require a rethink. The technical journals had lots of examples of starting from scratch or of shutdowns with high-speed cut-overs. But we needed to keep working while untangling local systems and applications from globally critical systems hosted elsewhere. There didn’t seem to be any examples or blueprints of what we were trying to do. Fortunately, company management was understanding and very supportive, and didn’t apply unreasonable expectations or deadlines. So we persevered, collecting and celebrating small and large victories, all the while switching to plans B and even C as the landscape evolved.
The final push
We were nine months in when we started planning a distinct timeline for the final cut-over. We set June 1 as the date for the full network separation, ten and a half months into the journey. Now we had a specific deadline, and although it could become a soft deadline if necessary, we were going to strive to achieve it. We began having daily operational meetings to review priorities and scheduling. Typically, over the past years, the different team members had quite independent activities: desktop support, lab engineering, system, network and database administration, etc. But now we were all pulling together, giving part or all of our time to the final cut-over preparations. Fortunately, the team dynamic was excellent. These final activities were mostly around the network migration and application cleansing of the lab environment, and since we had active projects ongoing this was a pretty demanding task. At first, the deadline didn’t seem achievable, but with each week our methodology became more efficient and we began knocking out the more challenging lab sections. A public holiday weekend in May fell just before the cut-over, and a couple of us were preparing to come in for a final push. Then we were at the Friday before, and only a few machines were left, which could be wrapped up after the weekend. We enjoyed the three-day break, came back, and almost leisurely finished the final tasks. The cut-over came and went without being noticed by 99% of the site. Except, obviously, by the network and sys admins!
Wow, we actually made it
There were still a few systems being brought back up, but this major peak had been crested. Most of the team took some well-earned vacation days in June, and now we have begun to focus on the important, but not as urgent, activities that had been on hold for a few months. We are privileged to continue building on an excellent IT foundation for what we expect will be an exciting future for Icagen in Tucson.
P.S. Many thanks to the local IT Team, to local Icagen management and our IT colleague in North Carolina. And great appreciation to our numerous and missed former big pharma colleagues (and still friends).