
Smart Labs: Getting the data right, part 1


So much gets written about the woes of R&D. It’s time to stop. Think. Act. That’s our message at this year’s IQPC SmartLab conference in Munich (#SLABx) this week. Getting the data right inside today’s ‘SmartLab’ enables smarter enterprise R&D tomorrow. This first of two blogs kicks off today with integrating legacy systems and managing secure data flow, based on the infographic released by Smart Lab Exchange.

R&D (#R&D) creates and uses data assets. Like any manufacturer, it needs to understand where its assets are, how good they are and how to put them together to make a quality product. The advantage of data as a product is that it can be used, reused and repurposed again and again.

The smart principles of R&D data are straightforward:

  • Capture data with context
  • Make sure you understand the provenance of the data
  • Structure it so that it can be combined and consumed by decision-makers in their own way

Straightforward to say, but to achieve it, data manufacturers need to think and act for the long term, not just for immediate ROI.
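As a minimal sketch of what these principles can look like at the point of capture (the field names and structure are illustrative assumptions, not an IDBS schema), a result might be stored with its context and an append-only provenance history alongside the value itself:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceEvent:
        """One step in the data's history: capture, modification or consumption."""
        actor: str       # who did it
        action: str      # e.g. "captured", "recalculated"
        timestamp: str   # ISO 8601, UTC

    @dataclass
    class DataRecord:
        """A measurement kept with its context so it can be combined and reused."""
        value: float
        units: str
        context: dict = field(default_factory=dict)    # sample, instrument, protocol...
        provenance: list = field(default_factory=list)

    record = DataRecord(
        value=7.4,
        units="pH",
        context={"sample_id": "S-1021", "instrument": "pH-meter-03", "protocol": "SOP-12"},
    )
    record.provenance.append(
        ProvenanceEvent("j.smith", "captured", datetime.now(timezone.utc).isoformat())
    )

Because the context travels with the value, a decision-maker downstream can combine records from different sources without guessing what each number meant.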

Integrating Legacy Systems

Some legacy systems, in-house or COTS, form part of a fixed process that does not need to change. These systems act as feeders for a foundation of quality data and should be retained. They can be driven by other systems, such as process ELNs (#ELN), and their data harvested for use elsewhere. Integration through RESTful web service APIs is the most flexible approach, but where that technology can’t be applied, bespoke integrations can still be of value in sourcing the right data and context.
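To make the RESTful option concrete, here is a hedged sketch of harvesting results from a legacy system over HTTP. The endpoint, payload fields and bearer token are hypothetical stand-ins, not a real vendor or IDBS API:

    import requests

    # Hypothetical endpoint on a legacy instrument-data service.
    LEGACY_API = "https://legacy-lims.example.com/api/v1/results"

    def harvest_results(run_id: str, token: str) -> list[dict]:
        """Pull the results for one run, keeping the context needed downstream."""
        response = requests.get(
            f"{LEGACY_API}/{run_id}",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        response.raise_for_status()  # fail loudly rather than ingest partial data
        payload = response.json()
        # Keep each value together with its context, never the number alone.
        return [
            {
                "value": r["value"],
                "units": r.get("units"),
                "run_id": run_id,
                "source_system": "legacy-lims",
            }
            for r in payload.get("results", [])
        ]

A bespoke integration would replace the HTTP call with whatever the legacy system offers (file drops, database views), but the goal is the same: the value arrives with its context.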

However, where a legacy system – particularly one of the many legacy in-house solutions – is standing in the way of progress, the smart approach is to retire it fast and replace its function with a cross-domain, multi-process application, such as E-WorkBook. There is no future in making a data compromise for expediency if the net result is poor-quality data.

Managing Secure Data Flow

In today’s multiparty R&D environment it is vital that data, process and context are stored securely. But what does that mean? Firstly, it’s about the individual: their profile, group and role should define what they can do and what they can see. Secondly, it’s about data provenance: all data should have audit trails from capture, through any modification, all the way to consumption.
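As a minimal sketch of both ideas together, assuming a simple role-to-permission map (the roles and actions are illustrative), an access check can be driven by the user’s role while every attempt, allowed or denied, lands in an append-only audit trail:

    from datetime import datetime, timezone

    # Illustrative role model: what each role may do with a record.
    PERMISSIONS = {
        "analyst":  {"read", "write"},
        "reviewer": {"read"},
        "admin":    {"read", "write", "delete"},
    }

    audit_trail = []  # append-only: capture, modification, consumption

    def access(user: str, role: str, action: str, record_id: str) -> bool:
        """Allow or deny an action by role, and audit the attempt either way."""
        allowed = action in PERMISSIONS.get(role, set())
        audit_trail.append({
            "user": user,
            "role": role,
            "action": action,
            "record": record_id,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return allowed

    access("j.smith", "reviewer", "write", "REC-0042")  # denied, but still audited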

This approach is standard practice in regulated environments, but the principle applies everywhere, even if you do not need GxP rigour at the bench. Thinking about security and audit right at the start is smart because they are too hard to retrofit down the line. If you get security and audit right at the early stages, then whatever workflow and orchestration tools you have can be used to traffic data to the right place and put it in front of the right person.

My next blog will look at managing relationships with external partners, the lack of standardization, and why data quality is key, because contextualized data underpins everything.

