Dr Nicholas Bevan
www.nicholasbevan.com

Sunday, 3 February 2019

Answer to Consultation Qs 6 & 7

Law Commission Consultation on Automated Vehicles

Consultation Question 6 (Paragraphs 3.80 - 3.96)  


Under what circumstances should a driver be permitted to undertake secondary activities when an automated driving system is engaged? 

Answer:  

I note that the reference to ‘driver’ in the context of a vehicle operating an ADS does not necessarily exclude the highly and fully automated vehicles contemplated by s1 AEVA 2018, since even vehicles with full automation are likely to have a manual override option.  Even so, I infer that this question is primarily intended to refer to vehicles whose dynamic driving task is controlled by an ADS at existing and close-to-market levels of automation (i.e. driver assistance (SAE L2) and conditional automation (SAE L3)), because where a vehicle is driven under normal manual control, existing civil liability rules and safety standards (which prohibit distractions) would apply.

There is no simple answer that allows for one-size-fits-all regulation.

The issue is a nuanced one that depends as much on the vehicle’s technical capacity as on its ODD (i.e. its operating environment and context of use); on the applicable international standards, such as the UNECE conventions (currently undergoing review); and on the minister’s discretion under s1 AEVA 2018 (and/or any ADSE criteria).

On a general point, there needs to be a close and clear correlation between the physical standards expected of a ‘driver’ and the level of sensory sophistication and processing capacity of the vehicle’s automation.

The present general proscription on distractions should apply (modified only to permit viewing screens to undertake remote self-parking, listening to the radio, etc.) for all vehicles with an ADS below SAE Level 4.

It is important to emphasise that the government’s limited terms of reference, set out at Appendix 1, appear to exclude the present and urgent need to regulate existing SAE L2 automation that is already on our roads (not to mention the SAE L3 automation that is close to market).   It almost goes without saying that no distractions should be permitted where a driver is actively monitoring a vehicle running in a driver-assist mode in a vehicle equipped with SAE L2 automation.  The driver’s attention should always be focused either on the road ahead or on undertaking peripheral road-safety observations.
A user-in-charge of a vehicle equipped with SAE L4 automation (and possibly also SAE L5) should never be expected to intervene in the capacity of a ‘fallback driver’, so secondary activities should generally be permitted to encourage a general state of alertness, except perhaps in certain relatively high-risk environments, such as locations shared with pedestrians.  Where a user-in-charge is required, the individual must always remain conscious, sober and able to drive, even if full attention is diverted away from the DDT whilst the vehicle’s ADS is actively engaged.


Consultation Question 7 (Paragraphs 3.80 - 3.96) 

Conditionally automated driving systems require a human driver to act as a fallback when the automated driving system is engaged. If such systems are authorised at an international level:  
(1) should the fallback be permitted to undertake other activities? 
(2) if so, what should those activities be? 

Answer:  

Much turns on what ‘a request to intervene’ involves (see paras 2.09(4) and 2.17) and on the intrinsic reliability and responsiveness of the ADS: e.g. whether a safety-critical intervention is needed in the face of an emergency or only as part of a more sedate handover.

This question concerns close-to-market automation that the government has indicated is unlikely to be classified by the minister as an ‘automated vehicle’ within the meaning of sections 8(1) and 1(4) of the AEVA 2018.  See LCCR 240 para 2.56 and footnote 83.

Q 7 (1) Yes, as a basic proposition, but one that is subject to certain qualifications. This qualified permissive approach reflects the following:
First, human intervention in this context [where the DDT is not being monitored actively] is unlikely to prove a reliably effective counter-measure for any ADS failure or other exigency that requires urgent or near-instant human intervention.  Fallback drivers are unlikely to be in a position to respond with sufficient speed to address a serious system failure in fast-moving traffic on dual carriageways and motorways.  Nor would a fallback driver be likely to respond sufficiently promptly to a burst tyre, or to correct a loss of control caused by an oil spillage at 50 or 60 mph on an A road.  In both examples the human factors considered at #3.7, #3.12, #3.85 and Appendix 3 apply.  The point here is that secondary activities are unlikely to have a consistent causative effect on the safety of the driving.
Second, any ADS that requires, for safety reasons, that a driver either actively supervise the vehicle’s dynamic driving task or respond promptly, as a fallback, to a system alert or some other unforeseen problem when operating in its ODD ought properly to be classified as SAE L2.  My suggestion is that the fallback driver’s intervention should never be depended upon for safety-critical interventions, for the reasons already alluded to.
Third, this level of automation is premised on a yet-to-be-achieved level of technical sophistication: the ability, through software programming, to operate SAE L3 automation safely, in almost any context, without requiring the fallback driver to make an instant, safety-critical intervention.  Conditional automation should only be licensed for use in highly specific low-risk ODDs.

To elucidate further, approval of SAE L3 conditional automation should be made conditional on the following:
(i) The first point incorporates the aforementioned concern, namely that the type approval of conditionally automated vehicles must be contingent on it first being officially established that each variant of SAE L3 vehicle automation (i.e. each model, class or type) is sufficiently safe and reliable for use when deployed within its operational design domain, independently of any human intervention, excepting the safe and preplanned engagement and disengagement of the ADS.  I understand that this level of operational reliability has yet to be achieved for normal everyday road use.
(ii) The safety of every vehicle model should be certified by an independent body (perhaps the ADSE agency proposed by the LC) applying internationally recognised criteria, and this needs to be undertaken to a standard equivalent to that applied in the aviation industry.  Manufacturers should not be allowed to self-certify their products.
(iii) Type approval should be restricted to use within the specified low-risk environments that comprise a vehicle’s operational design domain (ODD).  The ODD may factor in weather and other road conditions, as well as road types and locations near playgrounds and schools, etc.  I envisage that the geographical ODD of SAE L3 vehicles will need to be restricted initially to dedicated lanes along non-pedestrianised routes, where the risk to vulnerable road users (pedestrians or cyclists) is minimal or non-existent, or to very low speeds along clearly designated routes (similar to tram lines), accompanied by audible and visual signalling to alert other road users that an ADS is engaged. Paradoxically, a vehicle ostensibly equipped with SAE L3 automation might be capable of qualifying as highly automated (at SAE L4) and/or of being listed under s1 AEVA 2018 as an automated vehicle subject to the direct right conferred by s2 of the Act, if its ODD is so highly restricted that it is safe to operate without a fallback driver because no human intervention is required to meet a safety-critical contingency.
(iv) Each vehicle model should be hard-wired to prevent the ADS from operating outside its certified ODD, unless such use is lawful for that model and the change has been preplanned and authorised by the insurer.  It should not be possible for the owner/user to override the vehicle’s ODD (outside any predetermined permissive parameters) without first obtaining authority through an officially sanctioned reclassification process.
(v) Every journey on which the ADS is intended to be used should be preplanned/preprogrammed whilst the vehicle is stationary and the system is connected to online traffic reports, safety-critical software updates and weather data: either before embarkation or at an en route parking point (where a change in destination can be programmed). The ADS should not otherwise engage.  Clearly the ADS will need the capacity to alter its route during a journey in response to weather and traffic conditions, etc.
(vi) All secondary activities should be banned or suspended (e.g. the radio) in the moments leading up to the handover to human control as the vehicle approaches the end of its geographical ODD.  This should be enforced by on-board haptic and visual sensors and by vehicle-use data that must be disclosed to the vehicle’s insurer.  Inappropriate or illegal use could result in higher premiums.
(vii) Section 2 of the Automated and Electric Vehicles Act 2018 should be amended to include these transitionary levels of automation (SAE L2 & L3), especially if it is decided that an SAE L3 fallback driver is needed to perform a safety-critical intervention in an emergency.  This is necessary because the Government has already indicated that the minister is unlikely to exercise his wide discretion under s1 AEVA so as to classify conditional automation as an ‘automated vehicle’, because it cannot achieve a minimal risk condition: see #2.56 and #3.25.  This is unsatisfactory because this transitory level of technology (at SAE L3) seems to be inherently less safe, and more susceptible to misunderstandings as to its capability and reliability, than the futuristic levels of automation envisioned by some SAE L4 and all SAE L5 automation that qualify for s2’s direct right.  Indeed, the government’s approach could be equated to a shipping line that equips its vessels with life rafts that can only be deployed in fine weather.

The public interest reasons for imposing strict liability under s2 AEVA for high and advanced levels of automation apply with equal, if not greater, force to driver assist (SAE L2) and conditional automation (SAE L3), especially if there is reasonable cause to believe that the poorly understood human–machine interaction involved at these levels makes them comparatively less safe than the futuristic levels of near-autonomy promised by SAE L5 technology.  Users and third-party victims are entitled to expect the same level of civil law protection from the risk of loss or injury from automated transport, whatever its type classification or sophistication.  This is not provided under the existing common law and statutory framework for product defects in vehicles equipped with SAE L2 and L3 automation; see below the response to Q18.  This lacuna needs to be urgently addressed.

Driver assistance technology

Whilst it is appreciated that comments are not necessarily invited on SAE L2 automation, the dividing line between driver assist and conditional automation is something of a grey area, particularly when it comes to technical safety standards and what is meant by being ‘receptive to a handover request or to an evident system failure’.

If the safety of a vehicle equipped with SAE L3 conditional automation depends to any appreciable extent on a fallback driver being ready and able to promptly intervene in the dynamic driving task, independently of any ADS generated alert or call to action, then this could downgrade its classification to SAE L2.

One major drawback with partial automation at SAE L2 concerns the speed and effectiveness of a driver’s intervention, should it be needed; some of these vulnerabilities are alluded to at #3.7, #3.12, #3.85 and Appendix 3, 32.  This exposes the wider road-using public and consumers to an appreciable risk of loss or injury.

It stands to reason that even a conscientious and experienced supervising driver’s reaction will be delayed when compared to that of a normal driver.  This is because an SAE L2 supervising driver is not reacting to the danger as it is first perceived but to a later event, which is inherently dangerous.  It is reasonable to hypothesise the following sequence: first in time, there is the supervising driver’s realisation that there is a danger or potential hazard ahead; next, the gradual appreciation that the ADS has not responded appropriately (along a cognitive continuum ranging from the phenomenon emerging as a suspicion, to its appreciation as a possibility, then a probability, then a near certainty); and, during this process, the supervising driver reaching the conclusion that an intervention may be necessary.  Further time may elapse if there is an emotional reaction, perhaps due to momentary disbelief or hesitation where the ADS has previously operated faultlessly over a prolonged period.  Accordingly, it would appear that the recommended thinking times and stopping distances within the Highway Code are not wholly appropriate for present levels of partial automation at SAE L2.
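The cost of such supervisory delay is easily quantified. The sketch below is illustrative arithmetic only: the ordinary reaction time is inferred from the Highway Code’s 18 m thinking distance at 60 mph, and the additional delays are assumptions standing in for the extra cognitive stages described above, not measured values.

```python
# Illustrative sketch: all delay figures are assumptions, not measured data.
MPH_TO_MS = 0.44704  # exact miles-per-hour to metres-per-second factor

speed_ms = 60 * MPH_TO_MS            # 60 mph ≈ 26.82 m/s
ordinary_reaction_s = 18 / speed_ms  # Highway Code's 18 m thinking
                                     # distance implies ≈ 0.67 s

# Each extra cognitive stage (noticing the hazard, realising the ADS has
# not responded, deciding to intervene) adds delay; assumed values below.
for extra_delay_s in (0.5, 1.0, 2.0):
    extra_distance_m = speed_ms * extra_delay_s
    print(f"+{extra_delay_s} s of supervisory delay adds "
          f"{extra_distance_m:.0f} m travelled before braking even begins")
```

Even a single additional second of supervisory hesitation at 60 mph adds roughly 27 m, more than the Highway Code’s entire thinking distance for an ordinary driver.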
There are a number of technical solutions that could at least partly mitigate this ‘full attention deficit’ / driver distraction problem, but it is unclear to what extent (if at all) these are adequately regulated, still less enforced, by the UK authorities.

It should be a matter of real concern to any responsible government that thousands of vehicles equipped with SAE L2 automation are already in use on our roads and there appears to be very little evidence of adequate regulation or independent testing.  Nor does there appear to be any official consumer and user guidance on its safe use, or any training or testing of driver/supervisor proficiency in the new skills needed to operate these systems safely.

I am concerned that the regulation of existing driver assistance technology is inadequate.  I offer one example.  If one clicks through the link to ‘How Tesla’s autopilot system works’ on the BBC website referred to in LCCR 240 at footnote 117, one comes to a video presentation by Mr Musk, the Tesla CEO.  At 01.29 he explains that the vehicle’s ADS depends on its ‘long range ultrasonic sonar’ for its rear view. He does not give any indication of its actual range, but the graphic suggests it is short. I am informed by those better qualified to comment that ultrasonic sensors are relatively crude, have a very short range and are most commonly used for parking.  Even if we take Mr Musk at his word and accept, for the sake of argument, that he has extended its range to 10 or 20 metres, this is clearly and obviously (i) far less than is required of a human driver and (ii) inadequate for dual carriageway or motorway traffic, where there could be closing speeds of up to 40 mph, which equates to 17.88 metres per second.
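The closing-speed arithmetic above can be checked directly. The sketch below is illustrative only: the 10 and 20 metre sensor ranges are the hypothetical figures conceded for the sake of argument, not Tesla specifications.

```python
# Illustrative arithmetic only: sensor ranges are assumed figures,
# not manufacturer specifications.
MPH_TO_MS = 0.44704  # exact miles-per-hour to metres-per-second factor

closing_speed_mph = 40
closing_speed_ms = closing_speed_mph * MPH_TO_MS  # ≈ 17.88 m/s

for sensor_range_m in (10, 20):
    time_to_close_s = sensor_range_m / closing_speed_ms
    print(f"A {sensor_range_m} m sensor range is closed in "
          f"{time_to_close_s:.2f} s at a {closing_speed_mph} mph "
          f"closing speed")
```

At a 40 mph closing speed, even a 20 metre range gives barely over a second’s warning of an approaching vehicle, well short of the reaction times discussed above.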

I suggest that there should be a basic principle that no ADS should be permitted to rely on sensory input inferior to that required by law of a human driver in equivalent circumstances.
I believe, notwithstanding the inherent vulnerabilities indicated above, that SAE L2 automation still has the potential to save many lives.  One need only Google ‘Tesla Autopilot Prevents Crash Compilation’ to view a compelling, if tendentious, YouTube video showing why this intermediate level of automation might prove attractive to night drivers, high-mileage drivers, commercial drivers and others.  However, the Government must take urgent action to ensure that consumers and users are properly informed, trained and tested; that manufacturers equip these vehicles with fail-safe systems that keep supervising drivers consistently and actively engaged in monitoring the dynamic driving task; and that, where any human intervention is depended upon for the vehicle’s safe operation, that intervention will be causatively effective in preventing any danger.  Absent these fail-safe measures, such vehicles should be banned on the ground that the ADS is intrinsically unsafe. Manufacturers should be required to guarantee that an ODD cannot be interfered with or altered by the user so as to present a danger or otherwise breach national road traffic laws.

Q 7 (2) Consuming cold drinks or snacks, and passive activities such as reading, listening, conversing and phone use, should be permitted in vehicles with conditional automation; but phone use should be remotely connected via an onboard phone console, and any reading should be managed by the car’s systems and positioned so as to minimise distraction from the view ahead.  No typed emails or texts should be permitted, nor online shopping, form filling or other tasks that require concentrated, undivided attention.  In short, most activities that can be stopped almost instantly, and which leave the fallback user in the driving seat, correctly positioned and orientated, ready and able to respond to a call to action, should be permitted.

The urgent and immediate need to reform existing civil liability rules for transitional forms of automation.

I refer to my response to Q18 on the urgent and compelling need to reform the civil liability and insurance provision for driver assist and conditional automation at SAE L2 & L3.  In my opinion, it might be easier for all concerned if the definition of ‘automated vehicle’ in s1 were amended to include all forms of ADS, including partial automation at SAE L2.

In my view, the government appears to have missed a rare opportunity to persuade the motor insurance industry to accept a root and branch reform of the muddled and inconsistent statutory provision in this area (especially Part VI Road Traffic Act 1988) to bring its protection into line with the minimum standard of compensatory guarantee that applies on the continent.  The opportunity arose from the high probability that the prospect of highly and fully automated vehicles (with its resulting public and product liability implications) amounted to an existential threat to motor insurers, whose business model is based on the personal fault-based liability of vehicle users.  Yet the motor insurance industry has effectively secured a monopoly to underwrite the product liability risk of advanced futuristic vehicle automation.  This advanced technology is likely to present a relatively low business risk for motor insurers: due in part to the insurer’s statutory right of recovery and partly to compelling commercial pressures on manufacturers to offer a Volvo style no-fault compensation guarantee in return for low premium rates.  The government appears to have allowed the motor insurance industry to deftly evade the real and pressing need to provide mandatory cover for injury or loss caused by vehicle defects of existing and close to market technology that conceivably present an even greater risk to road safety than the more sophisticated levels of automation contemplated by the AEVA 2018.

The government risks being accused of mapping out the legislative framework of the relatively safe high ground of highly advanced automation but failing completely in its responsibility to attend to the urgent task of mapping the treacherous path leading there.
