Tech Transfer Central
ARTICLE REPRINT

Improve TTO benchmarking by normalizing data and taking a deeper dive behind the numbers

This article appeared in the August 2020 issue of Technology Transfer Tactics.

TTOs must understand their own performance data to optimize outcomes, but it is also critical to normalize that data so comparisons with other universities are meaningful and point to real areas for improvement.

Tracking data is vital not just for benchmarking against others but also for understanding the effectiveness of the tech transfer program and staff, stresses Laura Schoppe, founder and president of Fuentek, a tech transfer consulting company in Cary, NC. She served as VP of Strategic Alliances for the Association of University Technology Managers (AUTM) from 2011 to 2013 and chaired AUTM’s Global Technology Portal Committee. She currently chairs the board of the AUTM Foundation.

AUTM survey data is the most comprehensive information a TTO can use to compare its performance to that of peers, and TTOs can compare their own metrics against those of other institutions to determine whether they are overstaffed or understaffed, paying too much in legal fees, or patenting too much or too little, Schoppe explains. But it is important to normalize the data so that comparisons against true peers are meaningful.

“There is some expectation that there is a norm out there. On the aggregate, if you look at what others are doing, there should be some normalization and averaging,” Schoppe says. However, comparing the metrics is not as simple as putting your data alongside that of the best performers. The data must be normalized so that you are getting a useful comparison rather than measuring yourself against a TTO in significantly different circumstances.

For example, “you don’t want to compare yourself to schools that have medical schools if you don’t have a medical school. The metrics are quite different,” she explains. “You want to pick schools that are similar to you [in] research expenditures, because it’s not reasonable if you are a school that is bringing in $200 million a year to compare yourself to a school that is bringing in $1 billion a year. You want to get schools that are above and below you, but reasonably close.”

You also can identify two peer groups — one that most closely resembles your own circumstances, and an aspirational peer group that represents the schools that your administration or your TTO would like to emulate. The first may provide the most accurate comparison to see where you stand, but the second may help identify tactics that could improve your performance, Schoppe suggests.

Also watch for schools that may have experienced a one-off year that is reflected in the data. If a school hit a home run with a license, its royalty revenue may be hugely out of whack with the average reported by other schools and not a fair comparison to your program. “If you have an anomaly like that, you have to consider it the same way and not use that one outlier as evidence of how well your program is doing or should be doing,” she says.

Schoppe and her colleagues recently worked with a university whose data showed a huge spike in one year; it turned out the school had monetized a license, which skewed the numbers. That became a problem for the TTO, because university leaders looked at that data and expected the office to repeat it every year.

“It is important to identify those anomalies within your own data and make sure administration understands them, without expectations that those are repeatable. You have to manage those expectations,” Schoppe says.
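As a rough illustration only, and not a method Schoppe prescribes, a few lines of Python can flag such a one-off year before it skews a peer comparison. The revenue history, the year of the spike, and the three-times-the-median threshold below are all invented for the example.

    import pandas as pd

    # Hypothetical licensing-revenue history in $ millions; the 2017 spike
    # represents a one-time monetized license.
    revenue = pd.Series({2014: 1.1, 2015: 1.4, 2016: 1.3, 2017: 9.8, 2018: 1.5})

    # Flag any year more than 3x the median of all years as a likely anomaly to
    # exclude (or footnote) before benchmarking; the threshold is arbitrary.
    anomalies = revenue[revenue > 3 * revenue.median()]
    print(anomalies)   # 2017    9.8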

Getting to normalized data

Once you have identified peers and removed anomalies from both their data and yours, you can proceed with normalizing and comparing the data, Schoppe says. The data should be normalized along a series of different parameters.

For instance, you may use the number of invention disclosures divided by research expenditures. That normalizes the output, producing a ratio that accounts for the different research expenditures among schools, Schoppe explains. Another slice of the data may look at disclosures by number of office FTEs, or by number of licensing associates (see Figure 1). “AUTM data has shown an expectation of between two and four invention disclosures for every $10 million in research expenditures,” she notes. “Four is the ideal — what you want to be shooting for. If you’re at one, that’s a red flag.”

The same kind of normalization can be applied to data on licensing revenue or any other data point you wish to consider (see Figure 2).
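To make the arithmetic concrete, here is a minimal sketch of that normalization in Python. The school names, dollar figures, and column names are hypothetical; only the two-to-four disclosures per $10 million benchmark comes from the AUTM data Schoppe cites.

    import pandas as pd

    # Hypothetical peer-group data: school names, dollar figures ($ millions),
    # and column names are all invented for illustration.
    peers = pd.DataFrame({
        "school":         ["Your U", "Peer A", "Peer B", "Peer C"],
        "research_exp":   [200, 180, 250, 310],   # annual research expenditures ($M)
        "disclosures":    [25, 62, 88, 70],       # invention disclosures
        "license_income": [1.2, 2.5, 3.1, 1.8],   # licensing revenue ($M)
        "licensing_ftes": [3, 4, 6, 5],           # licensing associates
    })

    # Normalize outputs so schools with different research bases can be compared.
    peers["disclosures_per_10M"] = (peers["disclosures"] / (peers["research_exp"] / 10)).round(2)
    peers["disclosures_per_fte"] = (peers["disclosures"] / peers["licensing_ftes"]).round(2)
    peers["income_per_10M"] = (peers["license_income"] / (peers["research_exp"] / 10)).round(3)

    # Flag any school below the two-disclosures-per-$10M floor Schoppe cites
    # (four is the ideal; one would be a red flag).
    peers["below_floor"] = peers["disclosures_per_10M"] < 2

    print(peers[["school", "disclosures_per_10M", "disclosures_per_fte",
                 "income_per_10M", "below_floor"]])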

If you find yourself below the norm for AUTM or your peer group on any metric, that is the time to dig deeper into your own data beyond the numbers you provide to AUTM, Schoppe says. Look closer at the numbers by each college or department, she advises. For example, look at the research expenditures in the chemistry department vs. the number of invention disclosures. How does that compare to the physics department?

“When you start analyzing by department, you’re likely to find where the problem lies if you are below average. You’ll see that in certain departments you are not hitting your numbers,” she says. “That can help you focus and determine where you need to go in and do some training. If your chemistry department is the best funded in the school but you’re way low in invention disclosures, you can pinpoint your efforts and get your tech manager who specializes in chemistry to work very closely with them.”
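The same ratio can be recomputed department by department to locate the shortfall Schoppe describes. Below is a brief sketch with invented departments and figures, using an arbitrary cutoff of 75% of the office-wide average to flag laggards.

    import pandas as pd

    # Hypothetical internal data broken out by department; expenditures in $ millions.
    depts = pd.DataFrame({
        "department":   ["Chemistry", "Physics", "Biology", "Engineering"],
        "research_exp": [60, 25, 40, 75],
        "disclosures":  [5, 8, 11, 24],
    })

    depts["disclosures_per_10M"] = (depts["disclosures"] / (depts["research_exp"] / 10)).round(2)

    # Office-wide average of the same ratio, used as the yardstick.
    office_avg = depts["disclosures"].sum() / (depts["research_exp"].sum() / 10)

    # Departments well below the office average (here, under 75% of it) are
    # candidates for targeted training and closer tech-manager engagement.
    lagging = depts[depts["disclosures_per_10M"] < 0.75 * office_avg]
    print(lagging[["department", "disclosures_per_10M"]])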

Another issue that may become clear when you compare your data to your peer group’s is a large number of patents but not a lot of licensing revenue. In this case, Schoppe suggests looking at whether you are patenting heavily for one inventor or one department but not getting much revenue from it. If so, the data might indicate you have a “squeaky wheel” who gets a lot of attention without actually producing much in the end, she says.

“That person might be very productive with research and papers. That’s wonderful and you can acknowledge them in other ways, but spending money on another patent that is not going to license is not efficient for you,” Schoppe says. “The data can help you focus internally to look at whether you are patenting the right technology. It could be that you are not following up with the right marketing and that’s why the patents go nowhere, but it gives you something to think about.”

A deeper dive into your data also could reveal that a disproportionate amount of your licensing revenue comes from one inventor. If so, that means your program is at risk of a sharp decline if that inventor stops producing for any reason, Schoppe notes. The strategy in that case? “Treat that inventor quite well,” she says. You may want to talk to the administration and make them aware that other universities may try to poach them. But even as you protect that inventor’s contributions, at the same time try to nurture other faculty so that you aren’t too dependent on one person’s work, she advises.

Using the aspirational peer group

The aspirational peer group can be useful when a university is seeking to improve its overall performance, including TTO metrics. Many university presidents are setting lofty goals in their 2025 or 2030 roadmaps, calling for the university to double its research expenditures, double its invention disclosures, or produce more Nobel laureates, for instance.

“When they put forth those grandiose plans, you have to start cranking some numbers,” Schoppe says. “If the president wants revenue that is 2X of where you are, you need to look at your aspirational peer group and see who is at 2X of your output right now. Look at their stats and what is different.”

Don’t expect the other university to have twice the results you have in every category, Schoppe notes. Their invention disclosures might be 1.5 times your own even though their revenue is twice yours, she explains. But their staff might be three times the size of your staff.

“So you might have to tell the president you need a much bigger staff to get the desired results, and more in legal fees,” she says. “You can lay the groundwork for your president to see all the activities that are necessary to achieve those target numbers. It’s not going to be a one-to-one in which you double the numbers in one area and can expect to double the results in another area.”
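One way to start “cranking some numbers,” sketched here with invented figures rather than Schoppe’s actual methodology, is to compute the peer-to-you ratio for each input and output so the administration can see that the scaling is not one-to-one.

    # Hypothetical current metrics vs. an aspirational peer already producing
    # roughly 2x your licensing revenue (dollar figures in $ millions).
    yours = {"license_income": 2.0, "disclosures": 45, "tto_staff": 6, "legal_fees": 0.8}
    peer = {"license_income": 4.1, "disclosures": 68, "tto_staff": 18, "legal_fees": 2.4}

    # The ratios show the scaling is not one-to-one: revenue is ~2x and
    # disclosures ~1.5x, but staff and legal spend at the peer are ~3x.
    for metric in yours:
        print(f"{metric}: peer is {peer[metric] / yours[metric]:.1f}x yours")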

Be prepared to find answers in the data that you cannot act on immediately. For example, you may find that you are understaffed in marketing compared to your peer universities, but it will be hard to push for increased staffing while universities struggle with the COVID-19 pandemic, Schoppe points out.

The data also could indicate that you are adequately staffed or overstaffed but still producing substandard results, she says. That’s a management problem and not something you may want to communicate to administration until you get your house in order, she notes.

“You want to look at these numbers yourself before you get asked to look at them by your administration,” she says. “When the administration asks to look at the numbers, it’s because they think there is a problem, and there usually is.”

Schoppe notes that all this data management requires at least one staff member who is proficient in Excel and statistical analysis. She points out that the most recent data from AUTM is from 2018, and it will be difficult to make a straight comparison between that data and the current operation of TTOs during the pandemic. Data from 2020 will have all sorts of anomalies, she says, including sharp drops in resources and spikes in innovation related to the pandemic.

Looking at data another way

Data benchmarking provides a pathway for TTOs to learn from the experience of high performing universities, but there’s more than one way to view that data, says Cullum Clark, director of the Bush Institute-Southern Methodist University Economic Growth Initiative and adjunct professor of economics at SMU in University Park, TX.

Clark is co-author of a report from The Bush Institute on the innovation impact of tech transfer, and that focus on impact produced interesting results that are quite different from AUTM’s typical top-line results based on patents, revenues, and start-ups.

The data he and his co-authors put together is meant to highlight the importance of “building efficient and outcomes-focused technology transfer operations, instilling cultures of innovation and entrepreneurship, and engaging with surrounding business and innovation communities.”

The report ranks institutions for overall innovation impact, but importantly it takes a deeper look beyond sheer size to examine productivity in converting research spending to innovation impact output. The authors sought to highlight high-performing institutions “so that other institutions, as well as policymakers and other leaders, can learn from their example,” the report says. (The full report is available online.)

The top-ranked universities and systems for overall impact were some of the usual suspects given their massive scale: the University of California System, the University of Texas System, the Massachusetts Institute of Technology, the University of Washington, and the University of Michigan topped the list. But when viewed through a productivity lens, a much different picture emerges. The rankings, further parsed by type of institution, are shown in Figure 3.

The analysis included nine measures of innovation impact: patents issued by year, licenses signed, license income, spinout companies launched, licenses signed with spinouts, citations of a university’s papers in other papers, citations of a university’s papers in patents, PhD graduates in STEM fields, and bachelor’s and master’s graduates in STEM fields.

“One of the simplest but most predictive measures is how many employees there are in the tech transfer office,” Clark says. “It is predictive in a statistically significant way, predicting how universities do in terms of our innovation index, productivity, turning inputs into outputs. Properly staffing tech transfer matters.”

Key factors predicting impact

The study’s authors reported these points among their most important findings:

  • While bigger universities with larger research expenditures produce more impact overall, as expected, larger size actually predicts lower impact productivity.
  • The share of foreign-born people in a metro population has a strong positive association with the innovation impact and productivity of local institutions.
  • Once size is controlled for, there is little difference between public and private universities in innovation impact and productivity, a result that contrasts with some studies using narrower measures of innovation impact, which have tended to find greater productivity at private universities.
  • Having a larger TTO staff predicts greater success in technology commercialization and entrepreneurship.
  • Having a TTO director who is a trained engineer is associated with greater innovation impact, whereas having a director with business and start-up experience makes little difference in impact.

One of the key takeaways from The Bush Institute report is that while a big research budget clearly helps with overall impact, greater size is actually a predictor of lower productivity. In other words, the little guys get a lot done with fewer resources.

“Clearly some universities are getting a lot more bang for the buck,” Clark observes. “We see that size to a certain degree is the enemy of productivity. The bigger the university was, on average, the less productive it was, probably just because of the larger bureaucracy involved,” Clark says. “That’s a message to the smaller universities that maybe you can’t make yourself a huge institution, but you can make yourself an exceptionally productive small university.”

Clark notes that TTOs often can see a big improvement in productivity just by tracking their own data better, and like Schoppe he advises them to go beyond AUTM data and track individual schools for greater specificity. He urges university TTOs not just to track their own data but to benchmark what they actually care about improving and to manage to those aspirations.

“The vast majority of people at the university, outside the tech transfer office, would have no idea how they’re doing on these measures, despite the fact they often articulate their desires to lead in this area and they tout all the research work they’re proud of,” Clark says. “They could be more disciplined about [measuring] how they are doing, because universities are not so good at that sometimes. We’re preaching a little more business-minded approach to pursuing this aspect of their operations.”

Contact Clark at 214-200-4327 or CClark@bushcenter.org; contact Schoppe at 919-303-5874 or laschoppe@fuentek.com.

