Starting with "lower risk": what Reid is really saying is that there is less risk of cost overruns due to poor estimates at the outset. However, from a data integrity point of view, using already existing systems, which are known to hold inaccurate data, and which are accessed and updated by separate groups under different protocols, is actually an increase in risk.
There is more potential for information leakage, for failures, and, most fundamentally, for a wider net of loosely controlled human beings affecting data that could significantly impact people's lives if mistakes are made. Let's not be under any illusions here: mistakes will be made. I don't say this for political reasons; I say it in operational terms.
Large-scale databases, especially those that carry out masses of transactional queries and updates, will always have problems. "Fixing" data is a necessary fact of operational life. Place that reality in the context of multiple databases under multiple theatres of control, and you have a very risky situation indeed - especially when it concerns a card that, if introduced, will apparently become the de facto point of access to all manner of services and general day-to-day living.
The idea that such a system will also be more "efficient" is, to say the very least, fanciful. I imagine the argument is based on efficiency in data gathering: why gather lots of data you already have in other databases that can be easily queried? From an operational standpoint, however, the efficiency claim is highly questionable.
Again, the disconnected management of these discrete systems means that multiple layers of bureaucracy will stifle operational administration. In Reid's proposal, when a mistake occurs that affects data integrity, the process for rectifying it will be wholly inefficient and laborious. The consequential impact on those the mistakes affect could be massive.
Take, for example, a scenario where the ID card becomes a requirement for receiving medical treatment. What happens when data integrity is lost for someone who can then no longer be managed through the pervasive, all-seeing system of the state? What happens when, due to a failure in one system, a person becomes effectively a non-person for a period whilst the bureaucracy grinds on between the different stakeholders to rectify the situation?
Finally, one has to assume that when Reid claims the system will be "faster" he is referring only to the idea that they can get it all up and running within their given deadlines. There cannot be a serious argument that the system itself, once running, will be faster when it must make multiple access and query requests to multiple databases in multiple locations across saturated bandwidth networks. Throw on top an overhead for encryption (which one assumes must be planned), and it's pretty clear that this system will be anything but faster than the original proposal.
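To see why this matters, here is a minimal back-of-the-envelope sketch of the point. All the figures (number of databases, milliseconds per network hop, query, and encryption step) are illustrative assumptions of mine, not measurements from any real system; the function name is hypothetical. The sketch assumes the checks run sequentially, which is the pessimistic but plausible case on saturated networks with inter-system dependencies:

```python
# Back-of-the-envelope latency sketch. All figures below are illustrative
# assumptions, not measurements from any real or proposed system.

def lookup_latency(n_databases, network_ms, query_ms, crypto_ms):
    """Total time for one identity check that must hit several databases
    sequentially, paying network, query, and encryption cost each time."""
    return n_databases * (network_ms + query_ms + crypto_ms)

# Original plan: one central register, one round trip.
single = lookup_latency(1, network_ms=40, query_ms=10, crypto_ms=5)

# Reid's model: the same check fanned out across, say, four existing systems.
federated = lookup_latency(4, network_ms=40, query_ms=10, crypto_ms=5)

print(single, federated)  # 55 vs 220 -- the federated check is 4x slower
```

Even if some of the queries could run in parallel, every extra system still adds its own failure modes and its own encryption overhead, so the direction of the comparison holds whatever the exact numbers are.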
There is, of course, a little political trickery and triangulation going on here as well. After all, opposing the Reid U-turn proposals on the grounds that they will not be lower risk, more efficient or faster than the original plan suggests, by implication, that one supports the original plan - which is not necessarily so. There is also something else we should note in Reid's comment. He claimed that "[d]oing something sensible is not necessarily a U-turn".
Putting aside the absurdity of the argument that the U-turn is not a U-turn, these proposals are not - it seems - actually sensible at all. The system that will now be produced will - if it ever manages to become operational - be high risk, inefficient and slower. There is nothing sensible about Reid's statement, but then that doesn't surprise me in the slightest. He's a politician trying to talk about IT without having a clue about the real implications.
Hat Tip: The Spine for the image