Misbehavior handling throughout the V2V system lifecycle
Virendra Kumar, Jonathan Petit, William Whyte
Overview
• This presentation does not present any particular results, but is intended to help think about how
MBD will work within the system in the long run
• Present a decomposition of misbehavior-related activity into four parts that can be considered
independently
• Local Misbehavior Detection / Reporting / Investigation / Revocation Decision
• Rather than “What’s the best misbehavior [XXX] algorithm?”, ask “What do we do with the fact that
the best available misbehavior detection algorithm will differ?”
• Best known algorithm will vary from time to time
• Best locally implementable algorithm may depend on the sensors available on a vehicle
• Which of these algorithms need to be public? standardized? uniform?
• Administrative considerations: privacy implications, incentives, oversight
• What’s the right group to think about MBD going forward?
• How can the future introduction of V2I systems into misbehavior reporting processes improve MD performance
overall?
• Apologies in advance: we haven’t seen the other presentations so some of this material may be
redundant.
Who are we representing?
• Ourselves!
• Work not funded through IEEE / CAMP / NYC Pilot
• … therefore, nothing we say represents any other party’s
opinion
• … also, we can say “OBEs should do this” without worrying
about self-certification etc.
• … and we can talk about physical enforcement of revocation
(i.e. police stopping revoked cars) without making it seem that
OEMs (or anyone else) are encouraging it
Proposed conclusions (spoilers!)
• Two possible approaches
• Open garden approach: many different vehicle-side approaches to MBD
• Good detection, privacy risks for reporters
• The MA knows considerable information about the reporting unit
• Sensors, version of detection/reporting algorithm run, etc
• It’s okay for only a small percentage of vehicles to have the best misbehavior detection / reporting
• Privacy against MA must be managed
• Uniform approach
• A low number of distinct approaches to MBD
• Fewer privacy risks to reporters, incentives to report are less of a concern
• … but risks throwing away useful information
• Recommendation: follow open garden approach to the greatest possible extent
• In a world where vehicles may have a range of different capabilities, it’s not clear
what metrics to use to evaluate success of MBD
• These should be developed
Proposed conclusions (2)
• On the assumption that:
1. There will be relatively low numbers of attackers
2. There will be relatively low numbers of honest-but-malfunctioning
devices
… the system seems likely to be robust.
• But it is not clear that we know much about assumption (2)
• Closer study is warranted
What is misbehavior?
• Different types of anomaly:
• Message structure validation
• Message frequency
• Message content 'plausibility'
• Focus here is on message content plausibility
• Note that examples where a private key is known to have been
extracted allow us to bypass misbehavior detection and go straight
to revocation
• Galois has implemented a verifiably correct BSM parser; we recommend
the use of verifiably correct software
• DOS attacks can be carried out with just a radio, no need for a cert →
best way to address is to allow devices to report "channel
congestion in area X" so it can be investigated in the real world
Possible goals of MBD
• Reduce false alerts
• Remove malicious vehicles from the system
• Via revocation or physical activity
• Notify malfunctioning vehicles that they need maintenance
Cost points in the system
• Vehicle equipment
• Sensors
• Processing power
• For real-time data analysis
• For real-time crypto operations
• For background crypto operations
• Storage for certificates
• Storage for CRLs
• Storage for other data
• Maps, …
• Connectivity
• Off-vehicle
• Upload capacity for MBD
reports
• Download capacity for CRLs
• Storage for MBD
• Processing for analysis and
investigation (computer and
human)
Background: two misbehavior
scenarios
• “One lane off”
• Innocent misbehavior
• … but could cause forward
collision warning in all overtaking
vehicles
• Active malware
• Aware of MBD algorithms and
tries to bypass
• Impact:
• If messages can be confirmed by
other sensors, very hard for
attacker to create e.g. FCW false
alerts
• DSRC message is just one sensor
input, can be overruled by others
• Key DSRC scenarios:
• Intersection Movement Assist
(IMA)
• Extended Electronic Brake light
(EEBL)
• Impact analysis will emphasize
the key DSRC scenarios
Stages of the MBD process
• Local Misbehavior Detection
• Determining which messages to ignore
• Reporting
• Determining which messages to report
• If there is a time delay between observing
and reporting, deciding which messages to
store
• Investigation
• How the MA goes about determining
whether two or more certificates belong to
the same vehicle
• Revocation
• What are the criteria for revocation?
• Some are easy – e.g. same cert used in two
different places at the same time – our focus
is on revocation based on the message
contents
• [Diagram: the four stages (Detection, Reporting, Investigation, Revocation) linking the Vehicle, MA, LA, and PCA/RA/CRLG]
Local misbehavior detection
Local misbehavior detection v reporting
• Starting premise: system capacity costs money
• … for MBD reports
• … for MBD storage
• … for MBD analysis
• … for CRL distribution
• What can we say about using that capacity efficiently?
• Prioritize reports that report messages that were not
caught by local misbehavior detection
• By our definition of local misbehavior, those messages
were already filtered out
• … but this works best if misbehaving messages will be
locally detected by all vehicles
• If LMBD is assisted by sensors or processing power
there’s a tradeoff
• Cost on car v system cost
• Proper prioritization of reporting is helped by knowing
what the baseline local detection capability is on other
cars
• Proposal: there should be some minimum performance
requirements for LMBD, just as there are minimum
performance requirements for sending BSMs
• NOTE: RSEs can also do MBD
• May be better than OBEs – up to date, no privacy
concerns
• … but in known locations, so a malicious attacker can avoid them
• Could deter misbehavior in the first place
• NOTE 2: DSRC-equipped cellphones could report MBD
– this is something the system should support but not
investigated further here
Local misbehavior detection
• On the boundary between “security”
and “correct application operations”
• Similar to “plausibility checking” – is
there any real difference?
• Filter out bad messages before they
can cause alerts to be raised
• Entirely local
• Can be OEM-specific
• Can be private
• Can depend on other sensors / etc on
the receiving car
• Bad messages that successfully
bypass local misbehavior detection
can cause false alerts, possibly
worse
[Flowchart: DSRC, sensor, and map inputs feed a pipeline of checks (good data? raise alert? good alert? report?) with yes/no branches ending in a report or in no action]
Local misbehavior detection examples
and cost
• Basic plausibility checks
• Speed too great, unrealistic turning angle, unrealistic
acceleration
• Can be carried out on a message-by-message basis
• Consistency with sensors
• Are messages consistent with other sensor input?
• Note: not applicable to key DSRC scenarios
• Can be carried out on a message-by-message basis,
only for nearby cars
• Consistency with RF
• Is message direction consistent with RF?
• Approach suggested by Battelle, needs dual antennas
• Consistency between own messages
• Brake status not consistent with deceleration observed
between messages
• Speed not consistent with distance between messages
• Requires storage, processing linear in number of
nearby vehicles
• Consistency with map
• Is trajectory consistent with lanes known to exist?
• Requires storage, processing linear in number of
nearby vehicles
• Consistency with other messages
• Does the trajectory implied by one car’s messages match
the trajectory implied by other cars’?
• Requires storage, processing (kind-of) more than
linear in number of nearby vehicles
• Hand-wavingly, n² or n log n
• In practice bounded above by kn.
• BSM design is intended to enable
vehicles to make collision
avoidance determination on the
basis of a single message – is this
realistic?
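Two of the check families above, the per-message plausibility checks and the consistency between a sender's own messages, can be sketched as follows. All thresholds and field names are illustrative assumptions, not standardized values, and position is one-dimensional for simplicity.

```python
# Illustrative per-message plausibility check on BSM-style fields.
# All thresholds are assumptions for this sketch, not standardized limits.
MAX_SPEED_MPS = 70.0      # ~250 km/h
MAX_ACCEL_MPS2 = 12.0     # beyond plausible hard braking / hard launch
MAX_YAW_RATE_DPS = 60.0   # unrealistic turning angle rate

def plausible(msg: dict) -> bool:
    """Return False if any single-message field is outside physical bounds."""
    if not 0.0 <= msg["speed"] <= MAX_SPEED_MPS:
        return False
    if abs(msg["accel"]) > MAX_ACCEL_MPS2:
        return False
    if abs(msg["yaw_rate"]) > MAX_YAW_RATE_DPS:
        return False
    return True

# Consistency between a sender's own successive messages: the speed a
# sender reports should roughly match the distance covered between messages.
def self_consistent(prev: dict, curr: dict, tol: float = 5.0) -> bool:
    dt = curr["t"] - prev["t"]
    if dt <= 0:
        return False
    implied_speed = abs(curr["pos"] - prev["pos"]) / dt
    return abs(implied_speed - curr["speed"]) <= tol
```

The first check needs no state; the second needs one stored message per nearby vehicle, matching the "storage, processing linear in number of nearby vehicles" cost noted above.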
Example techniques applied to example
scenarios: EEBL
One lane to the left
• Basic plausibility checks – not
caught
• Consistency with sensors – not
caught
• Consistency with RF – caught
• Consistency between own
messages – not caught
• Consistency with map –
possibly caught
• Consistency with other
messages – caught
Malicious attacker
• Basic plausibility checks – not
caught
• Consistency with sensors – not
caught
• Consistency with RF – caught
• Consistency between own
messages – not caught
• Consistency with map – not
caught
• Consistency with other
messages – possibly caught
Example techniques applied to example
scenarios: IMA
One lane to the left
• Doesn’t affect validity of IMA alert!
• So does this count as misbehavior at all?
• Basic plausibility checks – not caught
• Consistency with sensors – not caught
• Consistency with RF – probably not
caught
• Consistency between own messages –
not caught
• Consistency with map – possibly
caught
• Consistency with other messages –
probably not caught
Malicious attacker
• Basic plausibility checks – not caught
• Consistency with sensors – not caught
• Consistency with RF – possibly caught
• Consistency between own messages –
not caught
• Consistency with map – not caught
• Consistency with other messages –
probably not caught
Local misbehavior detection:
conclusions
1. The specific use cases that DSRC is most helpful for are difficult to catch
on a message-by-message basis
• Require more processing power than is necessary for a simple assessment of
collision probability
• Maps help catch accidental misbehavior
• Directional RF helps catch some types of misbehavior
• It would be interesting to understand the delta in processing power necessary for
reliable alert generation between a fully trustworthy system and a system with
malicious actors
2. No matter what level of local misbehavior detection is in place, some
false alerts will likely be raised
• Reporting will be necessary
3. Adaptive misbehavior rule updates (e.g., from centralized analytics at
the MA) may have the ability to significantly reduce local processing
power needs.
Local MBD: answers to framing
questions
• Do local MBD algorithms need to be public?
• Yes: allows creation of sensible reporting algorithms
• No: allows OEMs to compete
• Do local MBD algorithms need to be standardized?
• Arguments from “public” apply, but also:
• No: May make it hard to change algorithms if attackers work out ways around the current
one
• Do local MBD algorithms need to be uniform?
• Arguments from “public” apply, but also:
• No: creates lowest-common-denominator approach
• Proposal:
• There should be a minimum standard local MBD algorithm
• OEMs should be free to develop other local MBD algorithms so long as they catch
everything that is caught by the standard
• No need for OEM-specific local MBD algorithms to be public
Self diagnosis (h/t Rob Abramson)
• Carry out local MBD on your own messages
• Can also significantly reduce both reporting and revocation traffic if devices have self-
diagnostics
• Innocently malfunctioning device can detect that its messages are inconsistent with
other vehicles’ messages (or that it is raising a lot of alerts without seeing reactions) and
• Shut itself off
• Turn on check engine light
• On next key-on can compare messages it “would have” sent with received messages
• (… assuming that innocently malfunctioning device correctly implements self-diagnostics
in the first place)
• Should there be a standard / minimum performance requirement for self-diagnosis, along with
an assumption that it exists that informs the decision to report?
• Yes: If there are and they’re correctly implemented, reduce need to report/revoke.
• No: Can’t assume they’re correctly implemented, need to report/revoke anyway.
• Proposal: There should be some self-diagnosis requirements but they shouldn’t affect
reporting criteria
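A minimal sketch of the self-diagnosis idea, assuming hypothetical counters and limits: the device runs local MBD checks on its own outgoing messages and tracks alerts it raises that draw no observable reaction.

```python
# Sketch of self-diagnosis: run local MBD on the device's own outgoing
# messages and count alerts that go unanswered. Class name, counters, and
# limits are illustrative assumptions, not a specified mechanism.
class SelfDiagnosis:
    def __init__(self, implausible_limit=10, unanswered_alert_limit=20):
        self.implausible_count = 0
        self.unanswered_alerts = 0
        self.implausible_limit = implausible_limit
        self.unanswered_alert_limit = unanswered_alert_limit

    def check_outgoing(self, message_plausible: bool):
        # The device applies its own plausibility checks to what it sends.
        if not message_plausible:
            self.implausible_count += 1

    def record_alert(self, reaction_observed: bool):
        # "Raising a lot of alerts without seeing reactions" from the slide.
        if not reaction_observed:
            self.unanswered_alerts += 1

    def should_shut_off(self) -> bool:
        return (self.implausible_count >= self.implausible_limit
                or self.unanswered_alerts >= self.unanswered_alert_limit)
```

As the slide notes, this only helps if the malfunctioning device implements the self-check correctly in the first place.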
Reporting
Reporting
• How important is any single
observation of misbehavior?
• What is reported to the MA?
• Should differently equipped
vehicles be allowed to report
different things?
• What are the privacy
implications?
• What are the administrative
implications?
Basic approach
• Messages that cause false alerts
should be reported
• How to determine false alerts?
• Some metric for expected reaction severity
• If a driver gets the alert and drives straight
on, alert is clearly false
• If a driver gets the alert and hard-brakes,
alert may be true
• Car may report false alerts on self or
observe behavior consistent with false alerts
on other vehicles
• Reporting all alerts catches false alerts
• …But runs risk of privacy violation
• … + unintended use of system for
enforcement activities against drivers
• OEMs may have different alert algorithms,
should they also have a globally agreed
“alert” algorithm for reporting?
• If a message is caught by local MBD, i.e.
if it does not cause a false alert but would
have if not caught, should it be reported?
• If all vehicles can catch the message, no
need for revocation
• ... But reporting does allow misbehaving vehicles
to be notified and/or physically located
• See later discussion
• Also, since this detection depends more on
sensors, its reporting is more privacy-
violating
• Proposal:
• Baseline: everyone reports false alert + no
braking
• Devices with greater filtering ability can
report messages whose falseness is more
ambiguous
• Caught messages should be reported, but at
lower priority than false-alert MBR
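The proposed baseline rule (report "false alert + no braking" at high priority, locally caught messages at lower priority) can be sketched as a small decision function; the priority labels are illustrative.

```python
# Sketch of the proposed baseline reporting rule. Priority labels are
# illustrative; the rule itself is the one proposed on this slide.
def should_report(alert_raised: bool, driver_braked: bool,
                  caught_by_local_mbd: bool):
    """Return (report?, priority)."""
    if alert_raised and not driver_braked:
        # Driver drove straight on: alert presumed false, baseline report.
        return True, "high"
    if caught_by_local_mbd:
        # Caught messages are reported, but at lower priority.
        return True, "low"
    # Alert followed by braking may be a true alert: no report.
    return False, "none"
```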
Report contents
• Authenticated BSMs (cannot
forge BSM)
• Own sensor data
• A report that only contains
authenticated BSMs doesn’t
necessarily need to be signed
• Privacy protection for
reporters?
• A report that contains other
information does need to be
signed
• Allows deeper analysis but
greater privacy risk
Importance of one observation
• Not all OBEs will have good connectivity, not all OBEs will be able to
connect immediately
• If OBEs connect once every 3 years (~1000 days) then 0.1% of vehicles will
connect every day
• If 5% of vehicles have always-on cellular connectivity* then 5% of vehicles
could connect every day
• * -- and don’t mind using it for MBR
• Is this enough?
• Claim: yes – we’re trying to catch significant disruption and misbehavior
that affects less than 1000 cars isn’t “significant”
• Implication: 5% penetration of cars with good connectivity is enough to
catch significant disruption
• May be no need for MBR over 5.9 GHz at all
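The connectivity arithmetic above can be sanity-checked with a one-line model; the number of cars affected per day is an illustrative assumption.

```python
# Back-of-envelope check of the claim above: how many observers of a
# misbehaving device can connect (and report) per day under each model?
# cars_affected_per_day is an illustrative assumption.
def expected_reports_per_day(cars_affected_per_day: float,
                             daily_connect_fraction: float) -> float:
    return cars_affected_per_day * daily_connect_fraction

# Connect-once-per-~1000-days: 0.1% of vehicles connect daily, so
# misbehavior affecting 1000 cars/day still yields about one report/day.
# 5% always-on: the same misbehavior yields about 50 reports/day.
```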
Different local reporting algorithms
• Local: if different between vehicles, how to handle cases where
a vehicle is accused of misbehaving because it was surrounded
by more punitive vehicles?
• Different algorithm per vehicle type?
• Algorithm depends on reporting vehicle type?
• Algorithm depends on reportED vehicle type?
• Can reporting algorithm be proprietary?
• If so, can it be protected? Can it be figured out from input and output?
Public, standardized, uniform?
Public?
• Yes: MA needs to know
reporting criteria
• Yes: publicly reviewed algorithm
is more likely to be robust
• No: Malicious attacker can work
out ways around the algorithm
• Proposal: Reporting algorithm
should be known to MA,
despite privacy risk if different
devices use different
reporting algorithms
• Implication: Need a way to
identify or describe reporting
algorithm
Standardized?
• Yes: Good to have clarity
• Yes: Reduces privacy concerns
• Yes: Avoids mistakes
• No: Slow to change, attacker
may work out way around it
• Proposal: So long as known to
MA, no need to standardize
Uniform?
• Yes: Best privacy
• No: Throws away ability to use
sensor information from best-
equipped devices
• Proposal: No need to be
uniform
Reporting and privacy
• Baseline MBR for false alert can contain BSMs from the two involved vehicles
• Any third party can observe that the messages would likely cause an alert and that the alerted
vehicle did not react
• (or did not react as strongly as expected)
• Note that this is an argument for reporting messages that would have caused false alerts
if not for sensor input
• … because these messages cause alerts in less-well-equipped vehicles, so the fact that those
vehicles do report those alerts compromises their privacy if others don’t
• Can misbehavior reports be stripped of identifying information when stored on the
reporting device, or when sent to the MA?
• Need to send the original signed BSMs to avoid slander attacks
• To avoid slander attacks via false alert based MBR, the MA must be able to
investigate both BSM senders in the report (this has been known for a long time)
• Innocent reporters in area of high misbehavior might inadvertently have their privacy
compromised
Reporting from more highly-equipped
vehicles
• Should highly-equipped vehicles be entitled to send more detailed misbehavior reports than
vehicles with more basic equipment (sensors, etc)?
• Proposal: Yes, but note argument above that false alerts can be detected from BSMs alone
• Should they be *required* to send more detailed misbehavior reports?
• How do they indicate that they are entitled to send more detailed reports?
• SSP?
• Privacy concerns:
• Revealing specific capabilities of car might identify car, or make it unique in particular area
• This would be known only to MA but would create database that could be hacked
• In the current design, BSMs and MBRs are signed by the same certificate
• Therefore, including device-specific MBR SSP in BSM cert would compromise BSM privacy
• Possible resolution:
1. See how much can be done with false alert detection based on BSMs alone
2. Permit more detailed misbehavior reports but…
• Sign those with dedicated certs other than the BSM certs
• As many MBR certs per week as BSM certs
• Possibly have multiple sets corresponding to different capability levels and sign with the lowest capability level
3. Separate MBR verification from analysis – one component verifies report signature, other uses contents
• Not clear how this would work
A free-rider problem
• For any given OEM the incentive is for their vehicles not to report
• Preserve privacy best
• Free-ride on the reporting of others
• How can this be avoided?
• Keep stats on MBR from different OEMs’ vehicles?
• Unlikely to be popular, even worse privacy properties
• Testing?
• Not clear what conditions to test for
• And a bad-faith OEM, which we’re assuming, can detect the test environment and give fake results
• Make reporting more privacy-preserving to reduce incentives to free-ride?
• Proposal: Include requirement for reporting in minimum performance
requirements, assume good-faith behavior on part of OEMs, assume that if
one bad actor is caught it’ll scare the others
• Incentive to OEMs may be to report misbehavior in exchange for new/updated
MD rules?
Liability in misbehavior reporting
• Needs to be clarified
• If someone is incorrectly revoked due to reports from another vehicle can they sue
• … the MA?
• … the reporting vehicle?
• Legal status needs to be clear: ideal situation provides strong protection for reporters
• What if there’s a malicious reporting algorithm?
• Ford cars have an algorithm that selects target GM cars and “nudges” the BSM in the report to make
it seem that there’s been an alert with slightly increased probability
• … legal status needs to be clear: some body needs power to punish malicious reporting
• …or simply 'poorly-reporting' algorithm?
• SCMS may need to provide evidence to OEMs to prove that their MD algorithms need to be
modified for improved effectiveness.
• Proposal: VIIC or similar policy body should investigate how policy around reporting
should be made and maintained.
Investigation
Overview
• MA gets a series of MBR
• There are some MBR that feature
the same cert enough times that a
revocation decision can be made
on the basis of those MBR alone
• (see caveat)
• For others, MA needs to ask LA
whether two messages belong to
the same vehicle
• Need to strike a balance between
allowing MA to investigate and
preventing it going on fishing
expedition looking for linked certs
[Diagram: Detection, Reporting, Investigation, Revocation stages linking the Vehicle, MA, LA, and PCA/RA/CRLG]
CAR and WAR
• CAR
• Cluster / Analyze / Resolve
• MA clusters reports to determine which certs
might go together
• Asks LA “am I right?”
• LA responds yes/no
• WAR
• Analyze / Weight / Resolve
• (Yes, we know the acronym letters are out of order)
• For each cert, MA assigns a weight
associated with the severity of the misbehavior
reports
• Asks LA “Does any vehicle cross a
threshold?”
• Learns only those vehicles that do, not the
ones that don’t
• In both cases, MA rationale for queries is
auditable
• (That caveat)
• Even if there are a lot of MBR featuring a
single cert, that might be an attacker
slandering that vehicle
• So some form of linkage query needs to be
carried out on the reporting certs
CAR / WAR strengths and weaknesses
CAR
• Strengths:
• Least burden on LA
• Weaknesses:
• Doesn’t catch instances where the keys are
extracted from a device and then used in
moderation in very different locations.
• Say I extract the keys from a device and
misbehave in New York and San Francisco with
different certs – how would clustering detect
that?
• If the question the MA can ask is “are *all* of
these certs from the same device”:
• Number of possible queries is exponential in
number of certs in the query
• Large queries → too many of them
• Small queries → fishing expedition
WAR
• Works best for a revocation
“badness function” where the
badness of the misbehaving
device is equal to the sum of the
badnesses of each of its
reports.
• Works less well in the case
where badness depends on
things like time or distance
between incidents.
Proposal
• Use CAR to set weights
• LA supports (SEQUENCE OF {cert identifier, weight, threshold})
API
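A sketch of how the proposed LA API might behave, with a plain dict standing in for the LA's private linkage information; all names here are illustrative assumptions, not a specified interface.

```python
# Sketch of the proposed WAR-style query: the MA sends per-cert weights and
# thresholds; the LA (which can link certs to devices) answers only with
# devices whose summed weight crosses the threshold, so the MA learns
# nothing about devices that stay below it. The cert->device dict stands in
# for the LA's linkage information; all names are illustrative.
def war_query(requests, cert_to_device):
    """requests: list of (cert_id, weight, threshold) tuples, mirroring the
    proposed SEQUENCE OF {cert identifier, weight, threshold} API."""
    totals = {}   # device -> summed misbehavior weight
    limits = {}   # device -> threshold (assumed uniform per device here)
    for cert_id, weight, threshold in requests:
        dev = cert_to_device[cert_id]
        totals[dev] = totals.get(dev, 0.0) + weight
        limits[dev] = threshold
    return sorted(dev for dev, w in totals.items() if w >= limits[dev])
```

This directly supports the additive "badness function" case noted as WAR's strength; badness depending on time or distance between incidents would not fit this interface.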
Revocation
Oversight of revocation
• Revocation has the potential to be legally messy
• The system is withdrawing a privilege from a user…
• … which potentially exposes them to safety-of-life risks
• (You can’t argue that sending BSMs improves your safety without arguing that preventing you from sending
them reduces your safety)
• Conditions for revocation need to be clear
• Is there a right of review?
• Concern:
• We want to automate decisions, including revocation decisions, as much as possible
• Decisions made by people are expensive
• Decisions made by lawyers are extremely expensive
• How much do we need to anticipate and budget for review, appeals, and other
legal/administrative action?
• Proposal: Operating / business plan for SCMS manager should take this
into account and ensure processes are defensible
“Revocation” for malfunctioning OBEs
• If an OBE is malfunctioning, e.g. creating the “one lane left” situation, it
would be useful to notify it without permanently revoking its certs
• “Standard” approach to revocation, i.e. publishing linkage seeds, doesn’t
allow this.
• Proposal for investigation: allow for “malfunction list”:
• Like CRL in that it is a signed cert management message
• Contains hash of cert rather than linkage seed
• Cert is cert that signed a misbehaving message
• MA can use investigation to only include one cert per vehicle
• OBEs retain the hashes of recently used certs to compare against malfunction list
• Malfunction list can be distributed only locally (10s of miles) to the observed
misbehavior as innocently malfunctioning vehicles will not in general travel far
• Other vehicles do not need to retain malfunction list information – vehicles check on
receipt whether they are included on the list, store the result of the check, and may
discard the list itself
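A sketch of the OBE-side check against a malfunction list, using a truncated SHA-256 as a stand-in for whatever cert hash the message format would actually specify; function names and the hash format are assumptions.

```python
# Sketch of the malfunction-list check proposed above: an OBE hashes its
# recently used certs, compares against the signed list, stores the result,
# and may then discard the list. Truncated SHA-256 is an assumed stand-in
# for the actual cert hash format.
import hashlib

def cert_hash(cert_bytes: bytes) -> bytes:
    return hashlib.sha256(cert_bytes).digest()[:8]  # assumed truncated format

def on_malfunction_list(recent_certs, malfunction_list) -> bool:
    """recent_certs: certs this OBE recently signed with;
    malfunction_list: set of cert hashes from the signed list message."""
    return any(cert_hash(c) in malfunction_list for c in recent_certs)
```

Because only one cert hash per vehicle is included and the list is distributed locally, the per-device cost is a handful of hash comparisons rather than CRL-style storage.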
Misbehavior for other applications
• SPaT: Traffic signals are out of phase with SPaT messages
• WSA: advertised services are not actually being offered
• DOS: not application specific but could be significant
• Ongoing question: who should be in charge of MBD design
• Security people?
• Application designers?
• Current situation in PC-land: anti-virus is written by specialist
developers
• … but that’s somewhat different situation
• Ongoing challenge: involve application researchers/specifiers in this
area even though they think of it as “security”
Misbehavior by SCMS Components
• A malicious MA can in principle break the privacy of any misbehavior
reporter/reported device
• Can the misbehavior investigation interface be limited without significantly
affecting MA’s detection capabilities?
• Can the misbehavior detection be distributed (instead of centralized at MA)
among LA, PCA, and MA?
• Can there be an effective oversight mechanism to keep a check on MA’s
malicious activities?
• Other SCMS components are also assumed to follow the protocols
honestly. What if they don’t? Examples:
• When a device is revoked, what if instead of blacklisting it, RA simply
replaces its linkage chain with a new one?
• What if an LA doesn’t follow the linkage value generation algorithm: creates
pre-linkage values that look normal but are not linked to one another?
• What if LA or PCA don’t answer misbehavior investigation queries honestly?
Large Scale Misbehavior/Malfunction
• Current revocation mechanisms are likely to succumb to large scale
revocation (> 1 million devices)
• Storage: Large CRLs are not only difficult to transmit but devices will most likely not
have the required storage space for them
• Computation: Every certificate verification requires checking the CRL, devices will
most likely not have the required computation power for that
Note: Revoking a PCA/ICA won’t always solve the problem, especially if misbehaving
devices are distributed over multiple PCAs/ICAs
• SiriusXM have a broadcast encryption based certificate provisioning
proposal that may be better equipped to handle large scale revocation
• Trades “2-way communication for cert provisioning” for “higher storage in devices”
• Requires an always-on broadcast communication channel
• Can handle essentially unlimited number of revocations
• Revocation is not CRL-based, doesn’t put burden on the device w.r.t.
storage/computation
Thank you!

More Related Content

Misbehavior Handling Throughout the V2V System Lifecycle

  • 1. Misbehavior handling throughout the V2V system lifecycle Virendra Kumar, Jonathan Petit, William Whyte
  • 2. Overview • This presentation does not present any particular results, but is intended to help think about how MBD will work within the system in the long run • Present a decomposition of misbehavior-related activity into four parts that can be considered independently • Local Misbehavior Detection / Reporting / Investigation / Revocation Decision • Rather than “What’s the best misbehavior [XXX] algorithm?”, ask “What do we do with the fact that the best available misbehavior detection algorithm will differ?” • Best known algorithm will vary from time to time • Best locally implementable algorithm may depend on the sensors available on a vehicle • Which of these algorithms need to be public? standardized? uniform? • Administrative considerations: privacy implications, incentives, oversight • What’s the right group to think about MBD going forward? • How can the future introduction of V2I systems into misbehavior reporting processes improve MD performance overall? • Apologies in advance: we haven’t seen the other presentations so some of this material may be redundant.
  • 3. Who are we representing? • Ourselves! • Work not funded through IEEE / CAMP / NYC Pilot • … therefore, nothing we say represents any other party’s opinion • … also, we can say “OBEs should do this” without worrying about self-certification etc. • … and we can talk about physical enforcement of revocation (i.e. police stopping revoked cars) without making it seem that OEMs (or anyone else) are encouraging it
  • 4. Proposed conclusions (spoilers!) • Two possible approaches • Open garden approach: many different vehicle-side approaches to MBD • Good detection, privacy risks for reporters • The MA knows considerable information about the reporting unit • Sensors, version of detection/reporting algorithm run, etc • It’s okay for only a small percentage of vehicles to have the best misbehavior detection / reporting • Privacy against MA must be managed • Uniform approach • A low number of distinct approaches to MBD • Fewer privacy risks to reporters, incentives to report are less of a concern • … but risks throwing away useful information • Recommendation: follow open garden approach to the greatest possible extent • In a world where vehicles may have a range of different capabilities, it’s not clear what metrics to use to evaluate success of MBD • These should be developed
  • 5. Proposed conclusions (2) • On the assumption that: 1. There will be relatively low numbers of attackers 2. There will be relatively low numbers of honest-but-malfunctioning devices … the system seems likely to be robust. • But it is not clear that we know much about assumption (2) • Closer study is warranted
  • 6. What is misbehavior? • Different types of anomaly: • Message structure validation • Message frequency • Message content 'plausibility' • Focus here is on message content plausibility • Note that examples where a private key is known to have been extracted allow us to bypass misbehavior detection and go straight to revocation • Galois has implemented verifiably correct BSM parser, we recommend the use of verifiably correct software • DOS attacks can be carried out with just a radio, no need for a cert  best way to address is to allow devices to report ”channel congestion in area X” so it can be investigated in the real world
  • 7. Possible goals of MBD • Reduce false alerts • Remove malicious vehicles from the system • Via revocation or physical activity • Notify malfunctioning vehicles that they need maintenance
  • 8. Cost points in the system • Vehicle equipment • Sensors • Processing power • For real-time data analysis • For real-time crypto operations • For background crypto operations • Storage for certificates • Storage for CRLs • Storage for other data • Maps, … • Connectivity • Off-vehicle • Upload capacity for MBD reports • Download capacity for CRLs • Storage for MBD • Processing for analysis and investigation (computer and human)
  • 9. Background: two misbehavior scenarios • “One lane off” • Innocent misbehavior • … but could cause forward collision warning in all overtaking vehicles • Active malware • Aware of MBD algorithms and tries to bypass • Impact: • If messages can be confirmed by other sensors, very hard for attacker to create e.g. FCW false alerts • DSRC message is just one sensor input, can be overruled by others • Key DSRC scenarios: • Intersection Movement Assist (IMA) • Extended Electronic Brake light (EEBL) • Impact analysis will emphasise the key DSRC scenarios
  • 10. Revocation Investigation Reporting Stages of the MBD process • Local Misbehavior Detection • Determining which messages to ignore • Reporting • Determining which messages to report • If there is a time delay between observing and reporting, deciding which messages to store • Investigation • How the MA goes about determining whether two or more certificates belong to the same vehicle • Revocation • What are the criteria for revocation? • Some are easy – e.g. same cert used in two different places at the same time – our focus is on revocation based on the message contents • PICTURE HEREDetection Vehicle MA LA PCA, RA, CRLG
  • 12. Local misbehavior detection v reporting• Starting premise: system capacity costs money • … for MBD reports • … for MBD storage • … for MBD analysis • … for CRL distribution • What can we say about using that capacity efficiently? • Prioritize reports that report messages that were not caught by local misbehavior detection • By our definition of local misbehavior, those messages were already filtered out • … but this works best if misbehaving messages will be locally detected by all vehicles • If LMBD is assisted by sensors or processing power there’s a tradeoff • Cost on car v system cost • Proper prioritization of reporting is helped by knowing what the baseline local detection capability is on other cars • Proposal: there should be some minimum performance requirements for LMBD, just as there are minimum performance requirements for sending BSMs • NOTE: RSEs can also do MBD • May be better than OBEs – up to date, no privacy concerns • In known locations so malicious attacker can avoid them • Could deter misbehavior in the first place • NOTE 2: DSRC-equipped cellphones could report MBD – this is something the system should support but not investigated further here
13. Local misbehavior detection
• On the boundary between “security” and “correct application operation”
• Similar to “plausibility checking” – is there any real difference?
• Filters out bad messages before they can cause alerts to be raised
• Entirely local
  • Can be OEM-specific
  • Can be private
  • Can depend on other sensors etc. on the receiving car
• Bad messages that successfully bypass local misbehavior detection can cause false alerts, possibly worse
[Flowchart: DSRC, sensor, and map inputs feed “Good data?” → “Raise alert?” → “Good alert?” → “Report?” decisions]
14. Local misbehavior detection: examples and cost
• Basic plausibility checks
  • Speed too great, unrealistic turning angle, unrealistic acceleration
  • Can be carried out on a message-by-message basis
• Consistency with sensors
  • Are messages consistent with other sensor input?
  • Note: not applicable to the key DSRC scenarios
  • Can be carried out on a message-by-message basis, only for nearby cars
• Consistency with RF
  • Is the message direction consistent with RF?
  • Approach suggested by Battelle; needs dual antennas
• Consistency between a car’s own messages
  • Brake status not consistent with deceleration observed between messages
  • Speed not consistent with distance between messages
  • Requires storage and processing linear in the number of nearby vehicles
• Consistency with map
  • Is the trajectory consistent with lanes known to exist?
  • Requires storage and processing linear in the number of nearby vehicles
• Consistency with other messages
  • Does the trajectory implied by one car’s messages match the trajectory implied by other cars’?
  • Requires storage and processing (kind of) more than linear in the number of nearby vehicles
    • Hand-wavingly, n² or n log n
    • In practice bounded above by kn
• BSM design is intended to enable vehicles to make collision avoidance determinations on the basis of a single message – is this realistic?
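The message-by-message plausibility checks above can be sketched as a simple filter. This is a minimal sketch only: the field names and physical limits are illustrative assumptions, not values from any standard.

```python
# Illustrative single-message plausibility check; constants are assumptions,
# not taken from SAE J2735 or any performance requirement.
MAX_SPEED_MPS = 70.0      # ~250 km/h: faster than any plausible road vehicle
MAX_ACCEL_MPS2 = 12.0     # beyond hard braking or maximum launch
MAX_YAW_RATE_DPS = 60.0   # unrealistic turning rate at road speeds

def plausible(bsm: dict) -> bool:
    """Return False if any field of a single message is physically implausible."""
    if abs(bsm["speed"]) > MAX_SPEED_MPS:
        return False
    if abs(bsm["accel"]) > MAX_ACCEL_MPS2:
        return False
    if abs(bsm["yaw_rate"]) > MAX_YAW_RATE_DPS:
        return False
    return True
```

Because each check looks at one message in isolation, this class of check has constant cost per message, which is why the slide lists it as the cheapest technique.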
15. Example techniques applied to example scenarios: EEBL

One lane to the left
• Basic plausibility checks – not caught
• Consistency with sensors – not caught
• Consistency with RF – caught
• Consistency between own messages – not caught
• Consistency with map – possibly caught
• Consistency with other messages – caught

Malicious attacker
• Basic plausibility checks – not caught
• Consistency with sensors – not caught
• Consistency with RF – caught
• Consistency between own messages – not caught
• Consistency with map – not caught
• Consistency with other messages – possibly caught
16. Example techniques applied to example scenarios: ICW

One lane to the left
• Doesn’t affect validity of the IMA alert!
  • So does this count as misbehavior at all?
• Basic plausibility checks – not caught
• Consistency with sensors – not caught
• Consistency with RF – probably not caught
• Consistency between own messages – not caught
• Consistency with map – possibly caught
• Consistency with other messages – probably not caught

Malicious attacker
• Basic plausibility checks – not caught
• Consistency with sensors – not caught
• Consistency with RF – possibly caught
• Consistency between own messages – not caught
• Consistency with map – not caught
• Consistency with other messages – probably not caught
17. Local misbehavior detection: conclusions
1. The specific use cases that DSRC is most helpful for are difficult to catch on a message-by-message basis
  • They require more processing power than is necessary for a simple assessment of collision probability
  • Maps help catch accidental misbehavior
  • Directional RF helps catch some types of misbehavior
  • It would be interesting to understand the delta in processing power necessary for reliable alert generation between a fully trustworthy system and a system with malicious actors
2. No matter what level of local misbehavior detection is in place, some false alerts will likely be raised
  • Reporting will be necessary
3. Adaptive misbehavior rule updates (e.g., from centralized analytics at the MA) may significantly reduce local processing power needs.
18. Local MBD: answers to framing questions
• Do local MBD algorithms need to be public?
  • Yes: allows creation of sensible reporting algorithms
  • No: allows OEMs to compete
• Do local MBD algorithms need to be standardized?
  • The arguments from “public” apply, but also:
  • No: may make it hard to change algorithms if attackers work out ways around the current one
• Do local MBD algorithms need to be uniform?
  • The arguments from “public” apply, but also:
  • No: creates a lowest-common-denominator approach
• Proposal:
  • There should be a minimum standard local MBD algorithm
  • OEMs should be free to develop other local MBD algorithms so long as they catch everything caught by the standard one
  • No need for OEM-specific local MBD algorithms to be public
19. Self-diagnosis (h/t Rob Abramson)
• Carry out local MBD on your own messages
• Can also significantly reduce both reporting and revocation traffic if devices have self-diagnostics
• An innocently malfunctioning device can detect that its messages are inconsistent with other vehicles’ messages (or that it is raising a lot of alerts without seeing reactions) and
  • Shut itself off
  • Turn on the check-engine light
• On next key-on it can compare the messages it “would have” sent with received messages
  • (… assuming the innocently malfunctioning device correctly implements self-diagnostics in the first place)
• Should there be a standard / minimum performance requirements for self-diagnosis, along with the assumption that it exists informing the decision to report?
  • Yes: if there are and they’re correctly implemented, they reduce the need to report/revoke.
  • No: can’t assume they’re correctly implemented, need to report/revoke anyway.
• Proposal: There should be some self-diagnosis requirements, but they shouldn’t affect reporting criteria
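The “raising a lot of alerts without seeing reactions” rule above could look like the following sketch. The function name, thresholds, and minimum-sample rule are hypothetical assumptions, not a specified algorithm.

```python
def self_check(alerts_raised: int, reactions_observed: int,
               min_alerts: int = 20, reaction_ratio: float = 0.1) -> bool:
    """Hypothetical self-diagnosis rule: if this device has raised many alerts
    but almost no surrounding vehicle reacted (braked, swerved, etc.), suspect
    that the device itself is malfunctioning. Returns True if the device looks
    healthy, False if it should shut itself off / turn on the check-engine light.
    All thresholds are illustrative."""
    if alerts_raised < min_alerts:
        return True  # too little data to conclude anything yet
    return reactions_observed / alerts_raised >= reaction_ratio
```

The minimum-sample guard matters: a single unreacted-to alert is normal, so the rule should only trigger once the imbalance is statistically meaningful.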
21. Reporting
• How important is any single observation of misbehavior?
• What is reported to the MA?
• Should differently equipped vehicles be allowed to report different things?
• What are the privacy implications?
• What are the administrative implications?
22. Basic approach
• Messages that cause false alerts should be reported
• How to determine false alerts?
  • Some metric for expected reaction severity
  • If a driver gets the alert and drives straight on, the alert is clearly false
  • If a driver gets the alert and hard-brakes, the alert may be true
  • A car may report false alerts on itself, or observe behavior consistent with false alerts on other vehicles
• Reporting all alerts catches false alerts
  • … but runs the risk of privacy violation
  • … plus unintended use of the system for enforcement activities against drivers
  • OEMs may have different alert algorithms; should they also have a globally agreed “alert” algorithm for reporting?
• If a message is caught by local MBD – i.e. it does not cause a false alert but would have if not caught – should it be reported?
  • If all vehicles can catch the message, there is no need for revocation
  • … but reporting does allow misbehaving vehicles to be notified and/or physically located
    • See later discussion
  • Also, since this detection depends more on sensors, its reporting is more privacy-violating
• Proposal:
  • Baseline: everyone reports “false alert + no braking”
  • Devices with greater filtering ability can report messages whose falseness is more ambiguous
  • Caught messages should be reported, but at lower priority than false-alert MBRs
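The proposal above can be summarized as a small decision function: report “false alert + no braking” at high priority and locally caught messages at low priority. This is a minimal sketch; the function name and priority labels are illustrative, not part of any specification.

```python
def should_report(alert_raised: bool, driver_hard_braked: bool,
                  caught_by_local_mbd: bool):
    """Sketch of the baseline reporting rule from the proposal.
    Returns (report?, priority) where priority labels are illustrative."""
    if alert_raised and not driver_hard_braked:
        # Driver got an alert and drove straight on: likely a false alert.
        return (True, "high")
    if caught_by_local_mbd:
        # Message was filtered out locally, so it caused no alert,
        # but it is still worth reporting at lower priority.
        return (True, "low")
    return (False, None)
```

Note that the first branch deliberately ignores local-MBD status: a message that slipped past local detection and produced an unreacted-to alert is exactly the case the baseline is designed to catch.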
23. Report contents
• Authenticated BSMs (cannot forge a BSM)
• Own sensor data
• A report that only contains authenticated BSMs doesn’t necessarily need to be signed
  • Privacy protection for reporters?
• A report that contains other information does need to be signed
  • Allows deeper analysis but carries greater privacy risk
24. Importance of one observation
• Not all OBEs will have good connectivity, and not all OBEs will be able to connect immediately
  • If OBEs connect once every 3 years (~1000 days), then 0.1% of vehicles will connect every day
  • If 5% of vehicles have always-on cellular connectivity*, then 5% of vehicles could connect every day
    • * … and don’t mind using it for MBR
• Is this enough?
  • Claim: yes – we’re trying to catch significant disruption, and misbehavior that affects fewer than 1000 cars isn’t “significant”
  • Implication: 5% penetration of cars with good connectivity is enough to catch significant disruption
  • There may be no need for MBR over 5.9 GHz at all
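The slide’s arithmetic, worked through explicitly. All figures come from the slide itself; the variable names are ours.

```python
# If OBEs connect once every ~3 years, what fraction connects on any given day?
CONNECT_INTERVAL_DAYS = 1000          # ~3 years between connections

fraction_connecting_per_day = 1 / CONNECT_INTERVAL_DAYS   # 0.001, i.e. 0.1%

# With 5% of the fleet having always-on cellular connectivity, the effective
# daily reporting population is dominated by the always-connected vehicles:
ALWAYS_ON_SHARE = 0.05

daily_reporters = (ALWAYS_ON_SHARE
                   + (1 - ALWAYS_ON_SHARE) * fraction_connecting_per_day)
# ≈ 0.051: roughly fifty times the coverage of sporadic connections alone
```

This is why the slide argues that 5% always-on penetration, not the sporadically connecting majority, determines how quickly significant disruption is caught.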
25. Different local reporting algorithms
• If reporting algorithms differ between vehicles, how do we handle cases where a vehicle is accused of misbehaving because it was surrounded by more punitive vehicles?
• Different algorithm per vehicle type?
  • Does the algorithm depend on the reportING vehicle type?
  • Does the algorithm depend on the reportED vehicle type?
• Can the reporting algorithm be proprietary?
  • If so, can it be protected? Can it be figured out from input and output?
26. Public, standardized, uniform?

Public?
• Yes: the MA needs to know the reporting criteria
• Yes: a publicly reviewed algorithm is more likely to be robust
• No: a malicious attacker can work out ways around the algorithm
• Proposal: The reporting algorithm should be known to the MA, despite the privacy risk if different devices use different reporting algorithms
  • Implication: need a way to identify or describe the reporting algorithm

Standardized?
• Yes: good to have clarity
• Yes: reduces privacy concerns
• Yes: avoids mistakes
• No: slow to change; an attacker may work out a way around it
• Proposal: So long as it is known to the MA, no need to standardize

Uniform?
• Yes: best privacy
• No: throws away the ability to use sensor information from the best-equipped devices
• Proposal: No need to be uniform
27. Reporting and privacy
• A baseline MBR for a false alert can contain the BSMs from the two involved vehicles
  • Any third party can observe that the messages would likely cause an alert and that the alerted vehicle did not react
    • (or did not react as strongly as expected)
  • Note that this is an argument for reporting messages that would have caused false alerts if not for sensor input
    • … because these messages cause alerts in less-well-equipped vehicles, so the fact that those vehicles do report those alerts compromises their privacy if others don’t
• Can misbehavior reports be stripped of identifying information when stored on the reporting device, or when sent to the MA?
  • Need to send the original signed BSMs to avoid slander attacks
  • To avoid slander attacks via false-alert-based MBR, the MA must be able to investigate both BSM senders in the report (this has been known for a long time)
• Innocent reporters in an area of high misbehavior might inadvertently have their privacy compromised
28. Reporting from more highly-equipped vehicles
• Should highly-equipped vehicles be entitled to send more detailed misbehavior reports than vehicles with more basic equipment (sensors, etc.)?
  • Proposal: Yes, but note the argument above that false alerts can be detected from BSMs alone
• Should they be *required* to send more detailed misbehavior reports?
• How do they indicate that they are entitled to send more detailed reports?
  • SSP?
• Privacy concerns:
  • Revealing the specific capabilities of a car might identify the car, or make it unique in a particular area
    • This would be known only to the MA, but would create a database that could be hacked
  • In the current design, BSMs and MBRs are signed by the same certificate
    • Therefore, including a device-specific MBR SSP in the BSM cert would compromise BSM privacy
• Possible resolution:
  1. See how much can be done with false alert detection based on BSMs alone
  2. Permit more detailed misbehavior reports but…
    • Sign those with dedicated certs other than the BSM certs
    • As many MBR certs per week as BSM certs
    • Possibly have multiple sets corresponding to different capability levels and sign with the lowest capability level
  3. Separate MBR verification from analysis – one component verifies the report signature, another uses the contents
    • Not clear how this would work
29. A free-rider problem
• For any given OEM the incentive is for their vehicles not to report
  • Preserves privacy best
  • Free-rides on the reporting of others
• How can this be avoided?
  • Keep stats on MBRs from different OEMs’ vehicles?
    • Unlikely to be popular, and has even worse privacy properties
  • Testing?
    • Not clear what conditions to test for
    • And a bad-faith OEM, which we’re assuming, can detect the test environment and give fake results
  • Make reporting more privacy-preserving to reduce incentives to free-ride?
• Proposal: Include a requirement for reporting in the minimum performance requirements, assume good-faith behavior on the part of OEMs, and assume that if one bad actor is caught it’ll scare the others
  • The incentive to OEMs may be to report misbehavior in exchange for new/updated MD rules
30. Liability in misbehavior reporting
• Needs to be clarified
• If someone is incorrectly revoked due to reports from another vehicle, can they sue…
  • … the MA?
  • … the reporting vehicle?
• Legal status needs to be clear: the ideal situation provides strong protection for reporters
• What if there’s a malicious reporting algorithm?
  • Hypothetical example: Ford cars have an algorithm that selects target GM cars and “nudges” the BSM in the report to make it seem that there’s been an alert with slightly increased probability
  • … legal status needs to be clear: some body needs the power to punish malicious reporting
  • … or simply a “poorly-reporting” algorithm?
• The SCMS may need to provide evidence to OEMs to prove that their MD algorithms need to be modified for improved effectiveness
• Proposal: The VIIC or a similar policy body should investigate how policy around reporting should be made and maintained.
32. Overview
• The MA gets a series of MBRs
• Some MBRs feature the same cert enough times that a revocation decision can be made on the basis of those MBRs alone
  • (see caveat)
• For others, the MA needs to ask the LA whether two messages belong to the same vehicle
• Need to strike a balance between allowing the MA to investigate and preventing it going on a fishing expedition looking for linked certs
33. CAR and WAR
• CAR: Cluster / Analyze / Resolve
  • The MA clusters reports to determine which certs might go together
  • Asks the LA “am I right?”
  • The LA responds yes/no
• WAR: Analyze / Weight / Resolve
  • (Yes, we know the letters don’t match)
  • For each cert, the MA assigns a weight associated with the severity of the misbehavior reports
  • Asks the LA “does any vehicle cross a threshold?”
  • Learns only those vehicles that do, not the ones that don’t
• In both cases, the MA’s rationale for queries is auditable
• (That caveat)
  • Even if there are a lot of MBRs featuring a single cert, that might be an attacker slandering that vehicle
  • So some form of linkage query needs to be carried out on the reporting certs
34. CAR / WAR strengths and weaknesses

CAR
• Strengths:
  • Least burden on the LA
• Weaknesses:
  • Doesn’t catch instances where the keys are extracted from a device and then used in moderation in very different locations
    • Say I extract the keys from a device and misbehave in New York and San Francisco with different certs – how would clustering detect that?
  • If the question the MA can ask is “are *all* of these certs from the same device”:
    • The number of possible queries is exponential in the number of certs in the query
    • Large queries → too many of them
    • Small queries → fishing expedition

WAR
• Works best for a revocation “badness function” where the badness of the misbehaving device is equal to the sum of the badnesses of each of its reports
• Works less well in cases where badness depends on things like time or distance between incidents
35. Proposal
• Use CAR to set weights
• The LA supports a (SEQUENCE OF {cert identifier, weight, threshold}) API
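One possible shape for that LA API, assuming the LA privately holds the cert-to-vehicle linkage map and reveals only the vehicles whose accumulated weight crosses their threshold (the WAR property above). All type and function names here are hypothetical; this is a sketch of the query semantics, not the SCMS interface.

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class WeightedQuery:
    """One entry of the proposed SEQUENCE OF {cert identifier, weight, threshold}.
    Field types are illustrative."""
    cert_id: bytes
    weight: float
    threshold: float

def la_resolve(queries: List[WeightedQuery],
               linkage: Dict[bytes, str]) -> Set[str]:
    """Hypothetical LA-side resolution: sum weights per underlying vehicle
    (using the linkage map known only to the LA) and return only the vehicles
    whose total crosses their threshold. Vehicles below threshold are never
    revealed to the MA, which is the privacy point of WAR."""
    totals: Dict[str, float] = {}
    thresholds: Dict[str, float] = {}
    for q in queries:
        vehicle = linkage[q.cert_id]          # cert -> vehicle, private to the LA
        totals[vehicle] = totals.get(vehicle, 0.0) + q.weight
        thresholds[vehicle] = max(thresholds.get(vehicle, 0.0), q.threshold)
    return {v for v, t in totals.items() if t >= thresholds[v]}
```

Note how this realizes the additive “badness function” the previous slide says WAR handles best: the LA only ever computes a per-vehicle sum.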
37. Oversight of revocation
• Revocation has the potential to be legally messy
  • The system is withdrawing a privilege from a user…
  • … which potentially exposes them to safety-of-life risks
    • (You can’t argue that sending BSMs improves your safety without arguing that preventing you from sending them reduces your safety)
  • Conditions for revocation need to be clear
  • Is there a right of review?
• Concern:
  • We want to automate decisions, including revocation decisions, as much as possible
    • Decisions made by people are expensive
    • Decisions made by lawyers are extremely expensive
  • How much do we need to anticipate and budget for review, appeals, and other legal/administrative action?
• Proposal: The operating/business plan for the SCMS manager should take this into account and ensure processes are defensible
38. “Revocation” for malfunctioning OBEs
• If an OBE is malfunctioning, e.g. creating the “one lane left” situation, it would be useful to notify it without permanently revoking its certs
• The “standard” approach to revocation, i.e. publishing linkage seeds, doesn’t allow this
• Proposal for investigation: allow for a “malfunction list”:
  • Like a CRL in that it is a signed cert management message
  • Contains the hash of a cert rather than a linkage seed
    • The cert is the cert that signed a misbehaving message
    • The MA can use investigation to include only one cert per vehicle
  • OBEs retain the hashes of recently used certs to compare against the malfunction list
  • The malfunction list can be distributed only locally (tens of miles from the observed misbehavior), as innocently malfunctioning vehicles will not in general travel far
  • Other vehicles do not need to retain malfunction list information – vehicles check on receipt whether they are included on the list, store the result of the check, and may discard the list itself
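The OBE-side check described above could be as simple as a set intersection over cert hashes. A minimal sketch, assuming a truncated SHA-256 digest as the cert identifier; the hash truncation and function names are illustrative, not taken from any cert management message format.

```python
import hashlib

def cert_hash(cert_bytes: bytes) -> bytes:
    # Illustrative assumption: a truncated SHA-256 digest identifies the cert
    # on the malfunction list, keeping list entries small.
    return hashlib.sha256(cert_bytes).digest()[:8]

def on_malfunction_list(recent_certs, malfunction_list) -> bool:
    """OBE-side check: compare hashes of recently used certs against the
    received malfunction list. Per the slide, the OBE stores only the boolean
    result and may discard the list itself afterwards."""
    recent_hashes = {cert_hash(c) for c in recent_certs}
    return bool(recent_hashes & set(malfunction_list))
```

Because each OBE checks only its own recent certs and then discards the list, the per-vehicle storage cost stays constant regardless of list size, unlike a CRL that every receiver must retain.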
39. Misbehavior for other applications
• SPaT: traffic signals are out of phase with SPaT messages
• WSA: advertised services are not actually being offered
• DoS: not application-specific, but could be significant
• Ongoing question: who should be in charge of MBD design?
  • Security people?
  • Application designers?
  • Current situation in PC-land: anti-virus is written by specialist developers
    • … but that’s a somewhat different situation
• Ongoing challenge: involve application researchers/specifiers in this area even though they think of it as “security”
40. Misbehavior by SCMS components
• A malicious MA can in principle break the privacy of any misbehavior reporter or reported device
  • Can the misbehavior investigation interface be limited without significantly affecting the MA’s detection capabilities?
  • Can misbehavior detection be distributed (instead of centralized at the MA) among the LA, PCA, and MA?
  • Can there be an effective oversight mechanism to keep a check on the MA’s malicious activities?
• Other SCMS components are also assumed to follow the protocols honestly. What if they don’t? Examples:
  • When a device is revoked, what if instead of blacklisting it, the RA simply replaces its linkage chain with a new one?
  • What if an LA doesn’t follow the linkage value generation algorithm, and creates pre-linkage values that look normal but are not linked to one another?
  • What if an LA or PCA doesn’t answer misbehavior investigation queries honestly?
41. Large-scale misbehavior/malfunction
• Current revocation mechanisms are likely to succumb to large-scale revocation (> 1 million devices)
  • Storage: large CRLs are not only difficult to transmit, but devices will most likely not have the required storage space for them
  • Computation: every certificate verification requires checking the CRL; devices will most likely not have the required computation power for that
  • Note: revoking a PCA/ICA won’t always solve the problem, especially if misbehaving devices are distributed over multiple PCAs/ICAs
• SiriusXM has a broadcast-encryption-based certificate provisioning proposal that may be better equipped to handle large-scale revocation
  • Trades “2-way communication for cert provisioning” for “higher storage in devices”
  • Requires an always-on broadcast channel
  • Can handle an essentially unlimited number of revocations
  • Revocation is not CRL-based, so it doesn’t burden the device w.r.t. storage/computation