Published as a conference paper at ICLR 2023
GUIDING SAFE EXPLORATION WITH WEAKEST PRECONDITIONS
Greg Anderson, Swarat Chaudhuri∗, Isil Dillig∗
Department of Computer Science
The University of Texas at Austin
Austin, TX, USA
{ganderso, swarat, isil}@cs.utexas.edu
ABSTRACT
In reinforcement learning for safety-critical settings, it is often desirable for the agent to obey safety constraints at all points in time, including during training. We present a novel neurosymbolic approach called SPICE to solve this safe exploration problem. SPICE uses an online shielding layer based on symbolic weakest preconditions to achieve a more precise safety analysis than existing tools without unduly impacting the training process. We evaluate the approach on a suite of continuous control benchmarks and show that it can achieve comparable performance to existing safe learning techniques while incurring fewer safety violations. Additionally, we present theoretical results showing that SPICE converges to the optimal safe policy under reasonable assumptions.
1 INTRODUCTION
In many real-world applications of reinforcement learning (RL), it is crucial for the agent to behave
safely during training. Over the years, a body of safe exploration techniques (García & Fernández,
2015) has emerged to address this challenge. Broadly, these methods aim to converge to high-
performance policies while ensuring that every intermediate policy seen during learning satisfies
a set of safety constraints. Recent work has developed neural versions of these methods (Achiam
et al., 2017; Dalal et al., 2018; Bharadhwaj et al., 2021) that can handle continuous state spaces and
complex policy classes.
Any method for safe exploration needs a mechanism for deciding if an action can be safely ex-
ecuted at a given state. Some existing approaches use prior knowledge about system dynamics
(Berkenkamp et al., 2017; Anderson et al., 2020) to make such judgments. A more broadly applica-
ble class of methods make these decisions using learned predictors represented as neural networks.
For example, such a predictor can be a learned advantage function over the constraints (Achiam
et al., 2017; Yang et al., 2020) or a critic network (Bharadhwaj et al., 2021; Dalal et al., 2018) that
predicts the safety implications of an action.
However, neural predictors of safety can require numerous potentially-unsafe environment interac-
tions for training and also suffer from approximation errors. Both traits are problematic in safety-
critical, real-world settings. In this paper, we introduce a neurosymbolic approach to learning safety
predictors that is designed to alleviate these difficulties.
Our approach, called SPICE¹, is similar to Bharadhwaj et al. (2021) in that we use a learned model to
filter out unsafe actions. However, the novel idea in SPICE is to use the symbolic method of weakest
preconditions (Dijkstra, 1976) to compute, from a single-time-step environment model, a predicate
that decides if a given sequence of future actions is safe. Using this predicate, we symbolically
compute a safety shield (Alshiekh et al., 2018) that intervenes whenever the current policy proposes
an unsafe action. The environment model is repeatedly updated during the learning process using
data safely collected using the shield. The computation of the weakest precondition and the shield
is repeated, leading to a more refined shield, on each such update.
∗Equal advising.
¹SPICE is available at https://github.com/gavlegoat/spice.
The benefit of this approach is sample-efficiency: to construct a safety shield for the next k time
steps, SPICE only needs enough data to learn a single-step environment model. We show this benefit
using an implementation of the method in which the environment model is given by a piecewise
linear function and the shield is computed through quadratic programming (QP). On a suite of chal-
lenging continuous control benchmarks from prior work, SPICE has performance comparable to fully neural approaches to safe exploration and incurs far fewer safety violations on average.
In summary, this paper makes the following contributions:
• We present the first neurosymbolic framework for safe exploration with learned models of safety.
• We present a theoretical analysis of the safety and performance of our approach.
• We develop an efficient, QP-based instantiation of the approach and show that it offers greater
safety than end-to-end neural approaches without a significant performance penalty.
2 PRELIMINARIES
Safe Exploration. We formalize safe exploration in terms of a constrained Markov decision
process (CMDP) with a distinguished set of unsafe states. Specifically, a CMDP is a structure
M = (S, A, r, P, p0, c) where S is the set of states, A is the set of actions, r : S × A → R is a reward function, P(x′ | x, u), where x, x′ ∈ S and u ∈ A, is a probabilistic transition function,
p0 is an initial distribution over states, and c is a cost signal. Following prior work (Bharadhwaj
et al., 2021), we consider the case where the cost signal is a boolean indicator of failure, and we
further assume that the cost signal is defined by a set of unsafe states SU . That is, c(x) = 1 if
x ∈ SU and c(x) = 0 otherwise. A policy is a stochastic function π mapping states to distribu-
tions over actions. A policy, in interaction with the environment, generates trajectories (or rollouts)
x0, u0, x1, u1, . . . ,un−1, xn where x0 ∼ p0, each ui ∼ π(xi), and each xi+1 ∼ P(xi, ui). Con-
sequently, each policy induces probability distributions Sπ and Aπ on the state and action. Given a
discount factor γ < 1, the long-term return of a policy π is R(π) = E_{xi,ui∼π}[ Σ_i γ^i r(xi, ui) ].
The goal of standard reinforcement learning is to find a policy π∗ = arg max π R(π). Popu-
lar reinforcement learning algorithms accomplish this goal by developing a sequence of policies
π0, π1, . . . , πN such that πN ≈ π∗. We refer to this sequence of policies as a learning process.
Given a bound δ, the goal of safe exploration is to discover a learning process π0, . . . , πN such that
πN = arg maxπ R(π) and ∀1 ≤ i ≤ N. P_{x∼Sπi}(x ∈ SU) < δ.
That is, the final policy in the sequence should be optimal in terms of the long-term reward, and every policy in the sequence (except for π0) should have a bounded probability δ of unsafe behavior. Note that this definition does not place a safety constraint on π0 because we assume that nothing is known about the environment a priori.
Weakest Preconditions. Our approach to the safe exploration problem is built on weakest precon-
ditions (Dijkstra, 1976). At a high level, weakest preconditions allow us to “translate” constraints
on a program’s output to constraints on its input. As a very simple example, consider the function
x ↦ x + 1. The weakest precondition for this function with respect to the constraint ret > 0 (where ret indicates the return value) would be x > −1. In this work, the “program” will be a model of the
environment dynamics, with the inputs being state-action pairs and the outputs being states.
For the purposes of this paper, we present a simplified weakest precondition definition that is tailored
towards our setting. Let f : S × A →2S be a nondeterministic transition function. As we will see
in Section 4, f represents a PAC-style bound on the environment dynamics. We define an alphabet
Σ which consists of a set of symbolic actions ω0, . . . , ωH−1 and states χ0, . . . , χH. Each symbolic
state and action can be thought of as a variable representing an a priori unknown state and action.
Let ϕ be a first order formula over Σ. The symbolic states and actions represent a trajectory in the
environment defined by f, so they are linked by the relation χi+1 ∈ f(χi, ωi) for 0 ≤ i < H. Then,
for a given i, the weakest precondition of ϕ is a formula ψ over Σ \ {χi+1} such that (1) for all
e ∈ f(χi, ωi), we have ψ =⇒ ϕ[χi+1 ↦ e] and (2) for all ψ′ satisfying condition (1), ψ′ =⇒ ψ. Here, the notation ϕ[χi+1 ↦ e] represents the formula ϕ with all instances of χi+1 replaced by
the expression e. Intuitively, the first condition ensures that, after taking one environment step from
χi under action ωi, the system will always satisfy ϕ, no matter how the nondeterminism of f is
resolved. The second condition ensures that ψ is as permissive as possible, which prevents us from
ruling out states and actions that are safe in reality.
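To make the substitution view of weakest preconditions concrete, here is a minimal sketch (not part of the paper's implementation) that reproduces the x ↦ x + 1 example with sympy.

```python
# A minimal sketch (not from the paper) of the substitution view of weakest
# preconditions, using sympy. The "program" is x -> x + 1 and the postcondition
# is ret > 0; substituting the program's output expression for `ret` yields a
# condition on the input equivalent to x > -1.
import sympy as sp

x, ret = sp.symbols("x ret", real=True)
postcondition = ret > 0
wp = postcondition.subs(ret, x + 1)   # weakest precondition over the input x
print(wp)                             # x + 1 > 0, i.e., x > -1
```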
3 SYMBOLIC PRECONDITIONS FOR CONSTRAINED EXPLORATION
Algorithm 1 The main learning algorithm
procedure SPICE
    Initialize an empty dataset D and random policy π
    for epoch in 1 . . . N do
        if epoch = 1 then
            πS ← π
        else
            πS ← λx.WPSHIELD(M, x, π(x))   ▷ λ denotes an anonymous function operator, not a regularization constant
        Unroll real trajectories {(si, ai, s′i, ri)} under πS
        D ← D ∪ {(si, ai, s′i, ri)}
        M ← LEARNENVMODEL(D)
        Optimize π using the simulated environment M
Our approach, Symbolic Preconditions for Constrained Exploration (SPICE), uses a learned environment model to both improve sample efficiency and support safety analysis at training time. To do this, we build on top of model-based policy optimization (MBPO) (Janner et al., 2019). Similar to MBPO, the model in our approach is used to generate synthetic policy rollout data which can be fed into a model-free learning algorithm to train a policy. In contrast to MBPO, we reuse the environment model to ensure the safety of the system. This dual use of the environment model allows for both efficient optimization and safe exploration.
The main training procedure is shown in Algorithm 1 and simultaneously learns an environment model M and the policy π. The algorithm maintains a dataset D of observed environment transitions, which is obtained by executing the current policy π in the environment. SPICE then uses this dataset to learn an environment model M, which is used to optimize the current policy π, as done in model-based RL. The key difference of our technique from standard model-based RL is the use of a shielded policy πS when unrolling trajectories to construct the dataset D. This is necessary for safe exploration because executing π in the real environment could result in safety violations. In contrast to prior work, the shielded policy πS in our approach is defined by an online weakest precondition computation which finds a constraint over the action space that symbolically represents all safe actions. This procedure is described in detail in Section 4.
4 SHIELDING WITH POLYHEDRAL WEAKEST PRECONDITIONS
4.1 OVERVIEW OF SHIELDING APPROACH
Algorithm 2 Shielding a proposed action
procedure WPSHIELD(M, x0, u∗0)
    f ← APPROXIMATE(M, x0, u∗0)
    ϕH ← ⋀_{i=1}^{H} (χi ∈ S \ SU)
    for t from H − 1 down to 0 do
        ϕt ← WP(ϕt+1, f)
    ϕ ← ϕ0[χ0 ↦ x0]
    (u0, . . . , uH−1) = arg min_{u′0,...,u′H−1 ⊨ ϕ} ∥u′0 − u∗0∥²
    return u0
Our high-level online intervention approach is presented in Algorithm 2. Given an environment model M, the current state x0, and a proposed action u∗0, the WPSHIELD procedure chooses a modified action u0 which is as similar as possible to u∗0 while ensuring safety. We consider an action to be safe if, after executing that action in the environment, there exists a sequence of follow-up actions u1, . . . , uH−1 which keeps the system away from the unsafe states over a finite time horizon H. In more detail, our intervention technique works in three steps:
Approximating the environment. Because computing the weakest precondition of a constraint with respect to a complex environment model (e.g., a deep neural network) is intractable, Algorithm 2 calls the APPROXIMATE procedure to obtain a simpler first-order local Taylor approximation to the environment model centered at (x0, u∗0). That is, given the environment model M, it computes matrices A and B, a vector c, and an error ε such that f(x, u) = Ax + Bu + c + ∆ where ∆ is an unknown vector with elements in [−ε, ε]. The error term is computed based on a normal Taylor series analysis such that, with high probability, M(x, u) ∈ f(x, u) in a region close to x0 and u∗0.
Computation of safety constraint. Given a linear approximation f of the environment, Algorithm 2 iterates backwards in time, starting with the safety constraint ϕH at the end of the time horizon H.
In particular, the initial constraint ϕH asserts that all (symbolic) states χ1, . . . , χH reached within
the time horizon are inside the safe region. Then, the loop inside Algorithm 2 uses the WP procedure (described in the next two subsections) to eliminate one symbolic state at a time from the formula ϕi.
After the loop terminates, all of the state variables except for χ0 have been eliminated from ϕ0, so
ϕ0 is a formula over χ0, ω0, . . . , ωH−1. The next line of Algorithm 2 simply replaces the symbolic
variable χ0 with the current state x0 in order to find a constraint over only the actions.
Projection onto safe space. The final step of the shielding procedure is to find a sequence u0, . . . , uH−1 of actions such that (1) ϕ is satisfied and (2) the distance ∥u0 − u∗0∥ is minimized. Here, the first condition enforces the safety of the shielded policy, while the second condition ensures that the shielded policy is as similar as possible to the original one. The notation u0, . . . , uH−1 ⊨ ϕ indicates that ϕ is true when the concrete values u0, . . . , uH−1 are substituted for the symbolic values ω0, . . . , ωH−1 in ϕ. Thus, the arg min in Algorithm 2 is effectively a projection onto the set of action sequences satisfying ϕ. We discuss this optimization problem in Section 4.4.
4.2 WEAKEST PRECONDITIONS FOR POLYHEDRA
In this section, we describe the WP procedure used in Algorithm 2 for computing the weakest pre-
condition of a safety constraint ϕ with respect to a linear environment model f. To simplify presen-
tation, we assume that the safe space is given as a convex polyhedron — i.e., all safe states satisfy
the linear constraint Px + q ≤ 0. We will show how to relax this restriction in Section 4.3.
Recall that our environment approximation f is a linear function with bounded error, so we have constraints over the symbolic states and actions: χi+1 = Aχi + Bωi + c + ∆, where ∆ is an unknown vector with elements in [−ε, ε]. In order to compute the weakest precondition of a linear constraint ϕ with respect to f, we simply replace each instance of χi+1 in ϕ with Aχi + Bωi + c + ∆∗, where ∆∗ is the most pessimistic possibility for ∆. Because the safety constraints are linear and the expression for χi+1 is also linear, this substitution results in a new linear formula which is a conjunction of constraints of the form wᵀν + vᵀ∆∗ ≤ y. For each element ∆i of ∆, if the coefficient of ∆∗i is positive in v, then we choose ∆∗i = ε. Otherwise, we choose ∆∗i = −ε. This substitution yields the maximum value of vᵀ∆∗ and is therefore the most pessimistic possibility for ∆∗.
Figure 1: Weakest precondition example.
Example. We illustrate the weakest precondition computation through a simple example. Consider a car driving down a (one-dimensional) road whose goal is to reach the other end of the road as quickly as possible while obeying a speed limit. The state of the car is a position x and velocity v. The action space consists of an acceleration a. Assume there is bounded noise in the velocity updates, so the dynamics are x′ = x + 0.1v and v′ = v + 0.1a + ε where −0.01 ≤ ε ≤ 0.01, and the safety constraint is v ≤ 1. Suppose the current velocity is v0 = 0.9 and the safety horizon is two. Then, starting with the safety constraint v1 ≤ 1 ∧ v2 ≤ 1 and stepping back through the environment dynamics, we get the precondition v1 ≤ 1 ∧ v1 + 0.1a1 + ε1 ≤ 1. Stepping back one more time, we find the condition v0 + 0.1a0 + ε2 ≤ 1 ∧ v0 + 0.1a0 + 0.1a1 + ε1 + ε2 ≤ 1. Picking the most pessimistic values for ε1 and ε2, we reach v0 + 0.1a0 + 0.01 ≤ 1 ∧ v0 + 0.1a0 + 0.1a1 + 0.02 ≤ 1. Since v0 is specified, we can replace v0 with 0.9 to simplify this to a constraint over the two actions a0 and a1, namely 0.91 + 0.1a0 ≤ 1 ∧ 0.92 + 0.1a0 + 0.1a1 ≤ 1. Figure 1 shows this region as the shaded triangle on the left. Any pair of actions (a0, a1) which lies inside the shaded triangle is guaranteed to satisfy the safety condition for any possible values of ε1 and ε2.
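Plugging the car example into the `action_constraints` sketch above (assuming that hypothetical helper) reproduces the two constraints derived by hand; the noise in the example enters only the velocity update, but since the safety constraint never reads the position, modeling it as elementwise noise with the same bound gives identical results.

```python
# Checking the car example against the hypothetical `action_constraints`
# sketch above.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])            # (x, v) -> (x + 0.1 v, v)
B = np.array([[0.0], [0.1]])                      # acceleration only affects v
c = np.zeros(2)
P = np.array([[0.0, 1.0]]); q = np.array([-1.0])  # safety: v <= 1
x0 = np.array([0.0, 0.9])                         # current velocity 0.9

G, h = action_constraints(A, B, c, eps=0.01, P=P, q=q, x0=x0, H=2)
print(G)   # [[0.1 0. ] [0.1 0.1]]  -> 0.91 + 0.1 a0 <= 1 and 0.92 + 0.1 a0 + 0.1 a1 <= 1
print(h)   # [-0.09 -0.08]
print(G @ np.array([0.8, 0.0]) + h <= 0)          # a0 = 0.8, a1 = 0 lies in the shaded triangle
```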
4.3 EXTENSION TO MORE COMPLEX SAFETY CONSTRAINTS
In this section, we extend our weakest precondition computation technique to the setting where the safe region consists of a union of convex polyhedra. That is, the safe region is represented as a set of matrices Pi and a set of vectors qi such that S \ SU = ⋃_{i=1}^{N} {x ∈ S | Pix + qi ≤ 0}.
Note that, while individual polyhedra are limited in their expressive power, unions of polyhedra can
approximate reasonable spaces with arbitrary precision. This is because a single polyhedron can
approximate a convex set arbitrarily precisely (Bronshteyn & Ivanov, 1975), so unions of polyhedra
can approximate unions of convex sets.
In this case, the formula ϕH in Algorithm 2 has the form ϕH = ⋀_{j=1}^{H} ⋁_{i=1}^{N} (Piχj + qi ≤ 0).
However, the weakest precondition of a formula of this kind can be difficult to compute. Because
the system may transition between two different polyhedra at each time step, there is a combi-
natorial explosion in the size of the constraint formula, and a corresponding exponential slow-
down in the weakest precondition computation. Therefore, we replace ϕH with an approximation ϕ′H = ⋁_{i=1}^{N} ⋀_{j=1}^{H} (Piχj + qi ≤ 0) (that is, we swap the conjunction and the disjunction). Note that ϕH and ϕ′H are not equivalent, but ϕ′H is a stronger formula (i.e., ϕ′H =⇒ ϕH). Thus, any states satisfying ϕ′H are also guaranteed to satisfy ϕH, meaning that they will be safe. More intuitively,
this modification asserts that, not only does the state stay within the safe region at each time step,
but it stays within the same polyhedron at each step within the time horizon.
With this modified formula, we can pull the disjunction outside the weakest precondition, i.e.,
WP( ⋁_{i=1}^{N} ⋀_{j=1}^{H} (Piχj + qi ≤ 0), f ) = ⋁_{i=1}^{N} WP( ⋀_{j=1}^{H} (Piχj + qi ≤ 0), f ).
The conjunctive weakest precondition on the right is of the form described in Section 4.2, so this
computation can be done efficiently. Moreover, the number of disjuncts does not grow as we iterate
through the loop in Algorithm 2. This prevents the weakest precondition formula from growing out
of control, allowing for the overall weakest precondition on ϕ′H to be computed quickly.
Intuitively, the approximation we make to the formula ϕH does rule out some potentially safe action
sequences. This is because it requires the system to stay within a single polyhedron over the entire
horizon. However, this imprecision can be ameliorated in cases where the different polyhedra com-
prising the state space overlap one another (and that overlap has non-zero volume). In that case, the
overlap between the polyhedra serves as a “transition point,” allowing the system to maintain safety
within one polyhedron until it enters the overlap, and then switch to the other polyhedron in order
to continue its trajectory. A formal development of this property, along with an argument that it is
satisfied in many practical cases, is laid out in Appendix B.
Example. Consider an environment which represents a robot moving in 2D space. The state space
is four-dimensional, consisting of two position elements x and y and two velocity elements vx and
vy. The action space consists of two acceleration terms ax and ay, giving rise to the dynamics
x′ = x + 0.1vx    y′ = y + 0.1vy
vx′ = vx + 0.1ax    vy′ = vy + 0.1ay
In this environment, the safe space is x ≥ 2 ∨ y ≤ 1, so that the upper-left part of the state space is considered unsafe. Choosing a safety horizon of H = 2, we start with the initial constraint (x1 ≥ 2 ∨ y1 ≤ 1) ∧ (x2 ≥ 2 ∨ y2 ≤ 1). We transform this formula to the stronger formula (x1 ≥ 2 ∧ x2 ≥ 2) ∨ (y1 ≤ 1 ∧ y2 ≤ 1). By stepping backwards through the weakest precondition
twice, we obtain the following formula over only the current state and future actions:
(x0 + 0.1vx0 ≥ 2 ∧ x0 + 0.2vx0 + 0.01ax0 ≥ 2) ∨ (y0 + 0.1vy0 ≤ 1 ∧ y0 + 0.2vy0 + 0.01ay0 ≤ 1).
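When the safe region is a union of polyhedra, the strengthened formula ϕ′H lets each polyhedron be handled independently. A minimal sketch, reusing the hypothetical `action_constraints` helper from Section 4.2:

```python
# A minimal sketch of the union-of-polyhedra case: the strengthened formula
# phi'_H is handled one polyhedron at a time, so the shield simply keeps a
# list of constraint systems (G_i, h_i), one per safe polyhedron.
def per_polyhedron_constraints(A, B, c, eps, safe_polyhedra, x0, H):
    """safe_polyhedra: list of (P_i, q_i) pairs; returns a list of (G_i, h_i)."""
    return [action_constraints(A, B, c, eps, P, q, x0, H)
            for (P, q) in safe_polyhedra]
```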
4.4 PROJECTION ONTO THE WEAKEST PRECONDITION
After applying the ideas from Section 4.3, each piece of the safe space yields a set of linear constraints over the action sequence u0, . . . , uH−1. That is, ϕ from Algorithm 2 has the form
ϕ = ⋁_{i=1}^{N} ( Σ_{j=0}^{H−1} Gi,j uj + hi ≤ 0 ).
Now, we need to find the action sequence satisfying ϕ for which the first action most closely matches the proposed action u∗0. In order to do this, we can minimize the objective function ∥u0 − u∗0∥². This function is quadratic, so we can represent this minimization problem as N quadratic programming problems. That is, for each polyhedron (Pi, qi) in the safe region, we solve:
minimize ∥u∗0 − u0∥²  subject to  Σ_{j=0}^{H−1} Gi,j uj + hi ≤ 0.
Such problems can be solved efficiently using existing tools. By applying the same technique inde-
pendently to each piece of the safe state space, we reduce the projection problem to a relatively small
number of calls to a quadratic programming solver. This reduction allows the shielding procedure
to be applied fast enough to generate the amount of data needed for gradient-based learning.
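As a concrete illustration of this projection step, the sketch below solves one QP per safe polyhedron with CVXOPT (which the paper's implementation also uses, though not necessarily in this form) and keeps the feasible solution whose first action is closest to the proposal. The tiny ridge on the follow-up actions is a numerical convenience of this sketch, not part of the paper's formulation.

```python
# A sketch of the per-polyhedron projection using CVXOPT's QP solver. The
# small ridge on the follow-up actions keeps the quadratic form positive
# definite and is a numerical convenience of this sketch only.
import numpy as np
from cvxopt import matrix, solvers

def project_action(u_star, constraint_systems, H, ridge=1e-8):
    """u_star: proposed first action; constraint_systems: list of (G, h) with G w + h <= 0."""
    u_star = np.asarray(u_star, float)
    m, n_var = u_star.size, H * u_star.size
    diag = np.concatenate([2.0 * np.ones(m), 2.0 * ridge * np.ones(n_var - m)])
    Pmat = matrix(np.diag(diag))                          # ||u0 - u*||^2 (+ tiny ridge on u1..uH-1)
    qvec = matrix(np.concatenate([-2.0 * u_star, np.zeros(n_var - m)]))
    solvers.options["show_progress"] = False
    best, best_dist = None, np.inf
    for G, h in constraint_systems:                       # one QP per safe polyhedron
        sol = solvers.qp(Pmat, qvec, matrix(np.asarray(G, float)),
                         matrix(-np.asarray(h, float)))   # G w <= -h
        if sol["status"] != "optimal":
            continue                                      # this polyhedron admits no safe plan
        w = np.array(sol["x"]).ravel()
        dist = np.linalg.norm(w[:m] - u_star)
        if dist < best_dist:
            best, best_dist = w[:m], dist
    return best                                           # None if every disjunct is infeasible
```

Each QP handles one disjunct of ϕ; the feasible solution whose first action is closest to the proposal is returned.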
Example: Consider again Figure 1. Suppose the proposed action is u∗0 = 1, represented by the solid line in Figure 1. Since the proposed action is outside of the safe region, the projection operation will find the point inside the safe region that minimizes the distance along the a0 axis only. This leads to the dashed line in Figure 1, which is the action u0 that is as close as possible to u∗0 while still intersecting the safe region represented by the shaded triangle. Therefore, in this case, WPSHIELD would return 0.8 as the safe action.
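Putting the pieces together, a schematic WPSHIELD composed of the hypothetical helpers sketched in the previous subsections might look as follows; it illustrates the control flow of Algorithm 2, not the paper's implementation.

```python
# A schematic WPSHIELD composed of the hypothetical helpers sketched above
# (`approximate`, `per_polyhedron_constraints`, `project_action`). Handling
# the case where no safe plan exists (e.g., falling back to a recovery
# controller) is outside this sketch.
def wp_shield(model, x0, u_star, safe_polyhedra, eps, H):
    A, B, c = approximate(model, x0, u_star, eps)          # local linear model around (x0, u*)
    systems = per_polyhedron_constraints(A, B, c, eps, safe_polyhedra, x0, H)
    u0 = project_action(u_star, systems, H)                # closest action with a provably safe plan
    if u0 is None:
        raise RuntimeError("no safe action sequence found within the horizon")
    return u0
```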
5 THEORETICAL RESULTS
We will now develop theoretical results on the safety and performance of agents trained with SPICE. For brevity, proofs have been deferred to Appendix A.
For the safety theorem, we will assume the model is approximately accurate with high probability and that the APPROXIMATE procedure gives a sound local approximation to the model. Formally, Pr_{x′∼P(·|x,u)}[∥M(x, u) − x′∥ > ε] < δM, and if f = APPROXIMATE(M, x0, u∗0), then for all actions u and all states x reachable within H time steps, M(x, u) ∈ f(x, u).
Theorem 1. Let x0 be a safe state and let π be any policy. For 0 ≤ i < H, let ui = WPSHIELD(M, xi, π(xi)) and let xi+1 be the result of taking action ui at state xi. Then with probability at least (1 − δM)^i, xi is safe.
This theorem shows why SPICE is better able to maintain safety compared to prior work. Intuitively,
constraint violations can only occur in SPICE when the environment model is incorrect. In contrast,
statistical approaches to safe exploration are subject to safety violations caused by either modeling
error or actions which are not safe even with respect to the environment model. Note that for a
safety level δ and horizon H, a modeling error can be computed as δM < 1 − (1 − δ)/exp(H − 1).
The performance analysis is based on treating Algorithm 1 as a functional mirror descent in the
policy space, similar to Verma et al. (2019) and Anderson et al. (2020). We assume a class of
neural policies F, a class of safe policies G, and a joint class H of neurosymbolic policies. We
proceed by considering the shielded policy λx.WPSHIELD (M, x, πN (x)) to be a projection of the
neural policy πN into G for a Bregman divergence DF defined by a function F. We define a safety
indicator Z which is one whenever WPSHIELD(M, x, π(i)(x)) = π(i)(x) and zero otherwise, and
we let ζ = E[1 − Z]. Under reasonable assumptions (see Appendix A for a full discussion), we
prove a regret bound for Algorithm 1.
Theorem 2. Let π_S^(i) for 1 ≤ i ≤ T be a sequence of safe policies learned by SPICE (i.e., π_S^(i) = λx.WPSHIELD(M, x, π(x))) and let π_S^∗ be the optimal safe policy. Additionally, we assume the reward function R is Lipschitz in the policy space and let LR be the Lipschitz constant of R, β and σ² be the bias and variance introduced by sampling in the gradient computation, ϵ be an upper bound on the bias incurred by using projection onto the weakest precondition to approximate imitation learning, ϵm be an upper bound on the KL divergence between the model and the true environment dynamics at all time steps, and ϵπ be an upper bound on the TV divergence between the policy used to gather data and the policy being trained at all time steps. Then setting the learning rate η = √((1/σ²)(1/T + ϵ)), we have the expected regret bound:
R(π_S^∗) − E[ (1/T) Σ_{i=1}^{T} R(π_S^(i)) ] = O( σ√(1/T + ϵ) + β + LRζ + ϵm + ϵπ )
This theorem provides a few intuitive results, based on the additive terms in the regret bound. First,
ζ is the frequency with which we intervene in network actions and as ζ decreases, the regret bound
becomes tighter. This fits our intuition that, as the shield intervenes less and less, we approach
standard reinforcement learning. The two terms ϵm and ϵπ are related to how accurately the model
captures the true environment dynamics. As the model becomes more accurate, the policy converges
to better returns. The other terms are related to standard issues in reinforcement learning, namely
the error incurred by using sampling to approximate the gradient.
Figure 2: Cumulative safety violations over time. (Panels: (a) car-racing, (b) noisy-road-2d, (c) obstacle, (d) obstacle2, (e) pendulum, (f) road-2d.)
6 EXPERIMENTAL EVALUATION
We now turn to a practical evaluation of SPICE. Our implementation of SPICE uses PyEarth (Rudy, 2013) for model learning and CVXOPT (Anderson et al., 2022) for quadratic programming. Our learning algorithm is based on MBPO (Janner et al., 2019) using Soft Actor-Critic (Haarnoja et al., 2018a) as the underlying model-free learning algorithm. Our code is adapted from Tandon (2018). We test SPICE using the benchmarks considered in Anderson et al. (2020). Further details of the benchmarks and hyperparameters are given in Appendix C.
Benchmark       CPO    CSC-MBPO   SPICE
acc              684        137     286
car-racing      2047       1047    1169
mountain-car    2374       2389       6
noisy-road         0          0       0
noisy-road-2d    286         37      31
obstacle         708        124       2
obstacle2       5592       1773    1861
pendulum        1933       2610    1211
road               0          0       0
road-2d          103         64      41
Average         9.48       3.77       1

Table 1: Safety violations during training.
We compare against two baseline approaches: Constrained Policy Optimization (CPO) (Achiam et al., 2017), a model-free safe learning algorithm, and a version of our approach which adopts the conservative safety critic shielding framework from Bharadhwaj et al. (2021) (CSC-MBPO). Details of the CSC-MBPO approach are given in Appendix C. We additionally tested MPC-RCE (Liu et al., 2020), another model-based safe-learning algorithm, but we find that it is too inefficient to be run on our benchmarks. Specifically, MPC-RCE was only able to finish on average 162 episodes within a 2-day time period. Therefore, we do not include MPC-RCE in the results presented in this section.
Safety. First, we evaluate how well our approach ensures system safety during training. In Table 1, we present the number of safety violations encountered during training for SPICE and the baselines. The last row of the table shows the average increase in the number of safety violations compared to SPICE (computed as the geometric mean of the ratio of safety violations for each benchmark). This table shows that SPICE is safer than CPO in every benchmark and achieves, on average, an 89% reduction in safety violations. CSC-MBPO is substantially safer than CPO, but still not as safe as SPICE. We achieve a 73% reduction in safety violations on average compared to CSC-MBPO. To give a more detailed breakdown, Figure 2 shows how the safety violations accumulate over time for several of our benchmarks. The solid line represents the mean over all trials while the shaded envelope shows the minimum and maximum values. As can be seen from these figures, CPO starts to accumulate violations more quickly and continues to violate the safety property more over time than SPICE. Figures for the remaining benchmarks can be found in Appendix C.
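The "Average" row of Table 1 and the reduction percentages quoted above can be reproduced from the raw counts; the snippet below does so, excluding benchmarks where both methods have zero violations from the geometric mean (an assumption about how that row was computed, since the paper does not spell it out).

```python
# Reproducing the "Average" row of Table 1 and the reduction percentages in
# the text from the raw violation counts. Excluding all-zero benchmarks from
# the geometric mean is an assumption of this sketch.
import math

violations = {                        # benchmark: (CPO, CSC-MBPO, SPICE)
    "acc": (684, 137, 286),          "car-racing": (2047, 1047, 1169),
    "mountain-car": (2374, 2389, 6), "noisy-road": (0, 0, 0),
    "noisy-road-2d": (286, 37, 31),  "obstacle": (708, 124, 2),
    "obstacle2": (5592, 1773, 1861), "pendulum": (1933, 2610, 1211),
    "road": (0, 0, 0),               "road-2d": (103, 64, 41),
}

def geomean_ratio(column):
    ratios = [v[column] / v[2] for v in violations.values() if v[column] > 0 and v[2] > 0]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

cpo, csc = geomean_ratio(0), geomean_ratio(1)
print(f"CPO / SPICE:      {cpo:.2f}x  ({1 - 1 / cpo:.0%} reduction)")   # ~9.48x, ~89%
print(f"CSC-MBPO / SPICE: {csc:.2f}x  ({1 - 1 / csc:.0%} reduction)")   # ~3.77x, ~73%
```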
Note that there are a few benchmarks (acc, car-racing, and obstacle2) where SPICE incurs more violations than CSC-MBPO. There are two potential reasons for this increase. First, SPICE relies
Figure 3: Training curves for SPICE and CPO. (Panels: (a) car-racing, (b) noisy-road-2d, (c) obstacle, (d) obstacle2, (e) pendulum, (f) road-2d.)
on choosing a model class through which to compute weakest preconditions (i.e., we need to fix an
APPROXIMATE function in Algorithm 2). For these experiments, we use a linear approximation,
but this can introduce a lot of approximation error. A more complex model class allowing a more
precise weakest precondition computation may help to reduce safety violations. Second, SPICE uses
a bounded-time analysis to determine whether a safety violation can occur within the next few time
steps. By contrast, CSC-MBPO uses a neural model to predict the long-term safety of an action. As
a result, actions which result in safety violations far into the future may be easier to intercept using
the CSC-MBPO approach. Given that SPICE incurs far fewer safety violations on average, we think these trade-offs are desirable in many situations.
Figure 4: Trajectories early in training.
Performance. We also test the performance of the
learned policies on each benchmark in order to under-
stand what impact our safety techniques have on model
learning. Figure 3 shows the average return over time for
SPICE and the baselines. These curves show that in most
cases SPICE achieves a performance close to that of CPO,
and about the same as CSC-MBPO. We believe that the
relatively modest performance penalty incurred by SPICE
is an acceptable trade-off in some safety-critical systems
given the massive improvement in safety. Further results
are presented in Appendix C.
Qualitative Evaluation. Figure 4 shows the trajectories
of policies learned by SPICE, CPO, and CSC-MBPO part-
way through training (after 300 episodes). In this figure, the agent controls a robot moving on a 2D
plane which must reach the green shaded area while avoiding the red cross-hatched area. For each
algorithm, we sampled 100 trajectories and plotted the worst one. Note that, at this point during
training, both CPO and CSC-MBPO behave unsafely, while even the worst trajectory sampled under SPICE was still safe. See Appendix C for a more complete discussion of how these trajectories
evolve over time during training.
7 RELATED WORK
Existing work in safe reinforcement learning can be categorized by the kinds of guarantees it pro-
vides: statistical approaches bound the probability that a violation will occur, while worst-case
analyses prove that a policy can never reach an unsafe state. SPICE is, strictly speaking, a statis-
tical approach—without a predefined environment model, we cannot guarantee that the agent will
never behave unsafely. However, as we show experimentally, our approach is substantially safer in
practice than existing safe learning approaches based on learned cost models.
Statistical Approaches. Many approaches to the safe reinforcement learning problem provide sta-
tistical bounds on the system safety (Achiam et al., 2017; Liu et al., 2020; Yang et al., 2020; Ma
et al., 2021; Zhang et al., 2020; Satija et al., 2020). These approaches maintain an environment
model and then use a variety of statistical techniques to generate a policy which is likely to be safe
with respect to the environment model. This leads to two potential sources of unsafe behavior: the
policy may be unsafe with respect to the model, or the model may be an inaccurate representation
of the environment. Compared to these approaches, we eliminate the first source of error by always
generating policies that are guaranteed to be safe with respect to an environment model. We show
in our experiments that this drastically reduces the number of safety violations encountered in prac-
tice. Some techniques use a learned model together with a linearized cost model to provide bounds,
similar to our approach (Dalal et al., 2018; Li et al., 2021). However, in these works, the system
only looks ahead one time step and relies on the assumption that the cost signal cannot have too
much inertia. Our work alleviates this problem by providing a way to look ahead several time steps
to achieve a more precise safety analysis.
A subset of the statistical approaches are tools that maintain neural models of the cost function in
order to intervene in unsafe behavior (Bharadhwaj et al., 2021; Yang et al., 2021; Yu et al., 2022).
These approaches maintain a critic network which represents the long-term cost of taking a particular
action in a particular state. However, because of the amount of data needed to train neural networks
accurately, these approaches suffer from a need to collect data in several unsafe trajectories in order
to build the cost model. Our symbolic approach is more data-efficient, allowing the system to avoid
safety violations more often in practice. This is borne out by the experiments in Section 6.
Worst-Case Approaches. Several existing techniques for safe reinforcement learning provide for-
mally verified guarantees with respect to a worst-case environment model, either during training
(Anderson et al., 2020) or at convergence (Alshiekh et al., 2018; Bacci et al., 2021; Bastani et al.,
2018; Fulton & Platzer, 2019; Gillula & Tomlin, 2012; Zhu et al., 2019). An alternative class of
approaches uses either a nominal environment model (Koller et al., 2018; Fisac et al., 2019) or
a user-provided safe policy as a starting point for safe learning (Chow et al., 2018; Cheng et al.,
2019). In both cases, these techniques require a predefined model of the dynamics of the environ-
ment. In contrast, our technique does not require the user to specify any model of the environment,
so it can be applied to a much broader set of problems.
8 CONCLUSION
SPICE is a new approach to safe exploration that combines the advantages of gradient-based learn-
ing with symbolic reasoning about safety. In contrast to prior work on formally verified exploration
(Anderson et al., 2020), SPICE can be used without a precise, handwritten specification of the envi-
ronment behavior. The linchpin of our approach is a new policy intervention which can efficiently
intercept unsafe actions and replace them with actions which are as similar as possible, but prov-
ably safe. This intervention method is fast enough to support the data demands of gradient-based
reinforcement learning and precise enough to allow the agent to explore effectively.
There are a few limitations to SPICE. Most importantly, because the interventions are based on a
linearized environment model, they are only accurate in a relatively small region near the current
system state. This in turn limits the time horizons which can be considered in the safety analysis,
and therefore the strength of the safety properties. Our experiments show that SPICE is still able
to achieve good empirical safety in this setting, but a more advanced policy intervention that can
handle more complex environment models could further improve these results. Additionally, SPICE
can make no safety guarantees about the initial policy used to construct the first environment model,
since there is no model to verify against at the time that that policy is executed. This issue could
be alleviated by assuming a conservative initial model which is refined over time, and there is an
interesting opportunity for future work combining partial domain expertise with learned dynamics.
FUNDING ACKNOWLEDGEMENTS
This work was supported in part by the United States Air Force and DARPA under Contract No.
FA8750-20-C-0002, by ONR under Award No. N00014-20-1-2115, and by NSF under grants CCF-
1901376 and CCF-1918889. Compute resources for the experiments were provided by the Texas
Advanced Computing Center.
REFERENCES
Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In
Proceedings of the 34th International Conference on Machine Learning - Volume 70 , ICML’17,
pp. 22–31. JMLR.org, 2017.
Mohammed Alshiekh, Roderick Bloem, Rüdiger Ehlers, Bettina Könighofer, Scott Niekum, and
Ufuk Topcu. Safe reinforcement learning via shielding. In Sheila A. McIlraith and Kil-
ian Q. Weinberger (eds.), Proceedings of the Thirty-Second AAAI Conference on Artificial In-
telligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and
the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New
Orleans, Louisiana, USA, February 2-7, 2018 , pp. 2669–2678. AAAI Press, 2018. URL
https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17211.
Greg Anderson, Abhinav Verma, Isil Dillig, and Swarat Chaudhuri. Neurosymbolic reinforcement
learning with formally verified exploration. In H. Larochelle, M. Ranzato, R. Hadsell, M.F.
Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems , volume 33, pp.
6172–6183. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/
paper/2020/file/448d5eda79895153938a8431919f4c9f-Paper.pdf.
Martin Anderson, Joachim Dahl, and Lieven Vandenberghe. CVXOPT. https://github.
com/cvxopt/cvxopt, 2022.
Edoardo Bacci, Mirco Giacobbe, and David Parker. Verifying reinforcement learning up to infinity.
In Zhi-Hua Zhou (ed.), Proceedings of the Thirtieth International Joint Conference on Artificial
Intelligence, IJCAI-21, pp. 2154–2160. International Joint Conferences on Artificial Intelligence
Organization, 8 2021. doi: 10.24963/ijcai.2021/297. URL https://doi.org/10.24963/
ijcai.2021/297. Main Track.
Osbert Bastani, Yewen Pu, and Armando Solar-Lezama. Verifiable reinforcement learning via policy
extraction. In Advances in Neural Information Processing Systems, pp. 2494–2504, 2018.
Felix Berkenkamp, Matteo Turchetta, Angela Schoellig, and Andreas Krause. Safe model-based
reinforcement learning with stability guarantees. Advances in neural information processing sys-
tems, 30, 2017.
Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, and Ani-
mesh Garg. Conservative safety critics for exploration. In International Conference on Learning
Representations, 2021. URL https://openreview.net/forum?id=iaO86DUuKi.
E. M. Bronshteyn and L. D. Ivanov. The approximation of convex sets by polyhedra. Sib Math J,
16(5):852–853, September-October 1975. doi: 10.1007/BF00967115.
Richard Cheng, Gábor Orosz, Richard M. Murray, and Joel W. Burdick. End-to-end safe re-
inforcement learning through barrier functions for safety-critical continuous control tasks. In
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence , AAAI’19. AAAI
Press, 2019. ISBN 978-1-57735-809-1. doi: 10.1609/aaai.v33i01.33013387. URL https:
//doi.org/10.1609/aaai.v33i01.33013387.
Yinlam Chow, Ofir Nachum, Edgar Duenez-Guzman, and Mohammad Ghavamzadeh. A lyapunov-
based approach to safe reinforcement learning. In Proceedings of the 32nd International Con-
ference on Neural Information Processing Systems, NeurIPS’18, pp. 8103–8112, Red Hook, NY ,
USA, 2018. Curran Associates Inc.
Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin Paduraru, and Yuval
Tassa. Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757, 2018.
Edsger Wybe Dijkstra. A discipline of programming, volume 613924118. prentice-hall Englewood
Cliffs, 1976.
Jaime F. Fisac, Anayo K. Akametalu, Melanie N. Zeilinger, Shahab Kaynama, Jeremy Gillula, and
Claire J. Tomlin. A general safety framework for learning-based control in uncertain robotic
systems. IEEE Transactions on Automatic Control, 64(7):2737–2752, 2019. doi: 10.1109/TAC.
2018.2876389.
Nathan Fulton and André Platzer. Verifiably safe off-model reinforcement learning. In International
Conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 413–430.
Springer, 2019.
Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning.
Journal of Machine Learning Research, 16(1):1437–1480, 2015.
Jeremy H. Gillula and Claire J. Tomlin. Guaranteed safe online learning via reachability: tracking a
ground target using a quadrotor. In IEEE International Conference on Robotics and Automation,
ICRA 2012, 14-18 May, 2012, St. Paul, Minnesota, USA , pp. 2723–2730, 2012. doi: 10.1109/
ICRA.2012.6225136. URL https://doi.org/10.1109/ICRA.2012.6225136.
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy
maximum entropy deep reinforcement learning with a stochastic actor. In Jennifer Dy and An-
dreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning ,
volume 80 of Proceedings of Machine Learning Research , pp. 1861–1870. PMLR, 10–15 Jul
2018a. URL https://proceedings.mlr.press/v80/haarnoja18b.html.
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash
Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-critic algo-
rithms and applications, 2018b. URL https://arxiv.org/abs/1812.05905.
Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-
based policy optimization. In Proceedings of the 33rd International Conference on Neural Infor-
mation Processing Systems, Red Hook, NY , USA, 2019. Curran Associates Inc.
Torsten Koller, Felix Berkenkamp, Matteo Turchetta, and Andreas Krause. Learning-based model
predictive control for safe exploration. In2018 IEEE Conference on Decision and Control (CDC),
pp. 6059–6066, 2018. doi: 10.1109/CDC.2018.8619572.
Yutong Li, Nan Li, H. Eric Tseng, Anouck Girard, Dimitar Filev, and Ilya Kolmanovsky. Safe
reinforcement learning using robust action governor. In Ali Jadbabaie, John Lygeros, George J.
Pappas, Pablo A. Parrilo, Benjamin Recht, Claire J. Tomlin, and Melanie N. Zeilinger (eds.),
Proceedings of the 3rd Conference on Learning for Dynamics and Control , volume 144 of Pro-
ceedings of Machine Learning Research , pp. 1093–1104. PMLR, 07 – 08 June 2021. URL
https://proceedings.mlr.press/v144/li21b.html.
Zuxin Liu, Hongyi Zhou, Baiming Chen, Sicheng Zhong, Martial Hebert, and Ding Zhao. Con-
strained model-based reinforcement learning with robust cross-entropy method, 2020.
Yecheng Jason Ma, Andrew Shen, Osbert Bastani, and Dinesh Jayaraman. Conservative and adap-
tive penalty for model-based safe reinforcement learning, 2021. URL https://arxiv.org/
abs/2112.07701.
Jason Rudy. PyEarth. https://github.com/scikit-learn-contrib/py-earth ,
2013.
Harsh Satija, Philip Amortila, and Joelle Pineau. Constrained markov decision processes via back-
ward value functions. In Proceedings of the 37th International Conference on Machine Learning,
ICML’20. JMLR.org, 2020.
Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods
for reinforcement learning with function approximation. In Proceedings of the 12th International
Conference on Neural Information Processing Systems , NIPS’99, pp. 1057–1063, Cambridge,
MA, USA, 1999. MIT Press.
Pranjal Tandon. PyTorch Soft Actor-Critic. https://github.com/pranz24/
pytorch-soft-actor-critic , 2018.
Abhinav Verma, Hoang Le, Yisong Yue, and Swarat Chaudhuri. Imitation-projected programmatic
reinforcement learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox,
and R. Garnett (eds.), Advances in Neural Information Processing Systems , volume 32. Cur-
ran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/
file/5a44a53b7d26bb1e54c05222f186dcfb-Paper.pdf.
Qisong Yang, Thiago D. Simão, Simon H. Tindemans, and Matthijs T. J. Spaan. WCSAC: Worst-
case soft actor critic for safety-constrained reinforcement learning. Proceedings of the AAAI
Conference on Artificial Intelligence, 35(12):10639–10646, May 2021. URL https://ojs.
aaai.org/index.php/AAAI/article/view/17272.
Tsung-Yen Yang, Justinian Rosca, Karthik Narasimhan, and Peter J Ramadge. Projection-based
constrained policy optimization. arXiv preprint arXiv:2010.03152, 2020.
Dongjie Yu, Haitong Ma, Shengbo Li, and Jianyu Chen. Reachability constrained reinforcement
learning. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and
Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning ,
volume 162 of Proceedings of Machine Learning Research, pp. 25636–25655. PMLR, 17–23 Jul
2022. URL https://proceedings.mlr.press/v162/yu22d.html.
Yiming Zhang, Quan Vuong, and Keith Ross. First order constrained optimization in pol-
icy space. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Ad-
vances in Neural Information Processing Systems , volume 33, pp. 15338–15349. Curran As-
sociates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/
af5d5ef24881f3c3049a7b9bfe74d58b-Paper.pdf.
He Zhu, Zikang Xiong, Stephen Magill, and Suresh Jagannathan. An inductive synthesis framework
for verifiable reinforcement learning. In ACM Conference on Programming Language Design and
Implementation (SIGPLAN), 2019.
A PROOFS OF THEOREMS
In this section we present proofs for the theorems in Section 5. First, we will look at the safety
results. We need some assumptions for this theorem:
Assumption 1. The function APPROXIMATE returns a sound, nondeterministic approximation of M in a region reachable from state x0 over a time horizon H. That is, let SR be the set of all states x for which there exists a sequence of actions under which the system can transition from x0 to x within H time steps. Then if f = APPROXIMATE(M, x0, u∗0), then for all x ∈ SR and u ∈ A, M(x, u) ∈ f(x, u).
Assumption 2. The model learning procedure returns a model which is close to the actual environ-
ment with high probability. That is, if M is a learned environment model then for all x, u,
Pr_{x′∼P(·|x,u)}[∥M(x, u) − x′∥ > ε] < δ.
Definition 1. A state x0 is said to have realizable safety over a time horizon H if there exists a
sequence of actions u0, . . . ,uH−1 such that, when x0, . . . ,xH is the trajectory unrolled starting
from x0 in the true environment, the formula ϕ inside WPSHIELD (M, xi, π(xi)) is satisfiable for
all i.
Lemma 1. (WPSHIELD is safe under bounded error.) Let H be a time horizon, x0 be a state with realizable safety over H, M be an environment model, and π be a policy. Choose ε such that for all states x and actions u, ∥M(x, u) − x′∥ ≤ ε where x′ is sampled from the true environment transition at x and u. For 0 ≤ i < H, let ui = WPSHIELD(M, xi, π(xi)) and let xi+1 be sampled from the true environment at xi and ui. Then for 0 ≤ i ≤ H, xi ∉ SU.
Proof. Combining Assumption 1 with condition 1 in the definition of weakest preconditions, we conclude that for all e ∈ M(xi, ui), WP(ϕi+1, f) =⇒ ϕi+1[xi+1 ↦ e]. Stepping backward through the loop in WPSHIELD, we find that for all ei ∈ M(xi−1, ui−1) for 1 ≤ i ≤ H, ϕ0 =⇒ ϕH[x1 ↦ e1, . . . , xH ↦ eH]. Because ϕH asserts the safety of the system, we have that ϕ0 also implies that the system is safe. Then because the actions returned by WPSHIELD are constrained to satisfy ϕ0, we also have that xi for 0 ≤ i ≤ H are safe.
Theorem 1. (WPSHIELD is probabilistically safe.) For a given state x0 with realizable safety over a time horizon H, a neural policy π, and an environment model M, let ε and δ be the error and probability bound of the model as defined in Assumption 2. For 0 ≤ i < H, let ui = WPSHIELD(M, xi, π(xi)) and let xi+1 be the result of taking action ui at state xi. Then with probability at least (1 − δ)^i, xi is safe.
Proof. By Lemma 1, if ∥M(x, u) − x′∥ ≤ ε then xi is safe for all 0 ≤ i ≤ H. By Assumption 2, ∥M(x, u) − x′∥ ≤ ε with probability at least 1 − δ. Then at each time step, with probability at most δ the assumption of Lemma 1 is violated. Therefore after i time steps, the probability that Lemma 1 can be applied is at least (1 − δ)^i, so xi is safe with probability at least (1 − δ)^i.
In order to establish a regret bound, we will analyze Algorithm 1 as a functional mirror descent in
the policy space. In this view, we assume the existence of a class G of safe policies, a class F ⊇ G
of neural policies, and a class H of mixed, neurosymbolic policies.
We define a safety indicator Z which is one whenever WPSHIELD(M, x, π(x)) = π(x) and zero
otherwise. We will need a number of additional assumptions:
1. H is a vector space equipped with an inner product ⟨·, ·⟩ and induced norm ∥π∥ = √⟨π, π⟩;
2. The long-term reward R is LR-Lipschitz;
3. F is a convex function on H, and ∇F is LF-Lipschitz continuous on H;
4. H is bounded (i.e., sup{∥π − π′∥ | π, π′ ∈ H} < ∞);
5. E[1 − Z] ≤ ζ, i.e., the probability that the shield modifies the action is bounded above by ζ;
6. the bias introduced in the sampling process is bounded by β, i.e., ∥E[ˆ∇F | π] − ∇F R(π)∥ ≤ β, where ˆ∇F is the estimated gradient;
7. for x ∈ S, u ∈ A, and policy π ∈ H, if π(u | x) > 0 then π(u | x) > δ for some fixed δ > 0;
8. the KL-divergence between the true environment dynamics and the model dynamics is bounded by ϵm; and
9. the TV-divergence between the policy used to gather data and the policy being trained is bounded by ϵπ.
For the following regret bound, we will need three useful lemmas from prior work. These lemmas
are reproduced below for completeness.
Lemma 2. (Janner et al. (2019), Lemma B.3) Let the expected KL-divergence between two transition distributions be bounded by max_t E_{x∼p1^t(x)}[DKL(p1(x′, u | x) ∥ p2(x′, u | x))] ≤ ϵm and max_x DTV(π1(u | x) ∥ π2(u | x)) < ϵπ. Then the difference in returns under dynamics p1 with policy π1 and p2 with policy π2 is bounded by
|R_{p1}(π1) − R_{p2}(π2)| ≤ 2Rγ(ϵπ + ϵm)/(1 − γ)² + 2Rϵπ/(1 − γ) = O(ϵπ + ϵm).
Lemma 3. (Anderson et al. (2020), Appendix B) Let D be the diameter of H, i.e., D = sup{∥π − π′∥ | π, π′ ∈ H}. Then the bias incurred by approximating ∇H R(π) with ∇F R(π) is bounded by
∥E[ˆ∇F | π] − ∇H R(π)∥ = O(β + LRζ).
Lemma 4. (Verma et al. (2019), Theorem 4.1) Let π1, . . . , πT be a sequence of safe policies returned by Algorithm 1 (i.e., πi is the result of calling WPSHIELD on the trained policy) and let π∗ be the optimal safe policy. Let β and σ² be bounds on the bias and variance of the gradient estimation and let ϵ be a bound on the error incurred due to imprecision in WPSHIELD. Then letting η = √((1/σ²)(1/T + ϵ)), we have the expected regret over T iterations:
R(π∗) − E[ (1/T) Σ_{i=1}^{T} R(πi) ] = O( σ√(1/T + ϵ) + β ).
Now, using Lemma 2, we will bound the gradient bias incurred by using model rollouts rather than true-environment rollouts.
Lemma 5. For a given policy π, the bias in the gradient estimate incurred by using the environment model rather than the true environment is bounded by
∥ˆ∇F R(π) − ∇H R(π)∥ = O(ϵm + ϵπ).
Proof. Recall from the policy gradient theorem (Sutton et al., 1999) that
∇F R(π) = E_{x∼ρπ, u∼π}[∇F log π(u | x) Qπ(x, u)]
where ρπ is the state distribution induced by π and Qπ is the long-term expected reward starting from state x under action u. By Lemma 2, we have |Qπ(x, u) − ˆQπ(x, u)| ≤ O(ϵm + ϵπ) where ˆQπ is the expected return under the learned environment model. Then because log π(u | x) is the same regardless of whether we use the environment model or the true environment, we have ∇F log π(u | x) = ˆ∇F log π(u | x) and
∥ˆ∇F R(π) − ∇F R(π)∥
= ∥E[ˆ∇F log π(u | x) ˆQπ(x, u)] − E[∇F log π(u | x) Qπ(x, u)]∥
= ∥E[∇F log π(u | x) ˆQπ(x, u)] − E[∇F log π(u | x) Qπ(x, u)]∥
= ∥E[∇F log π(u | x) ˆQπ(x, u) − ∇F log π(u | x) Qπ(x, u)]∥
= ∥E[∇F log π(u | x) (ˆQπ(x, u) − Qπ(x, u))]∥.
Now because we assume π(u | x) > δ whenever π(u | x) > 0, the gradient of the log is bounded above by a constant. Therefore,
∥ˆ∇F R(π) − ∇H R(π)∥ = O(ϵm + ϵπ).
Theorem 2. (SPICE converges to an optimal safe policy.) Let π_S^(i) for 1 ≤ i ≤ T be a sequence of safe policies learned by SPICE (i.e., π_S^(i) = λx.WPSHIELD(M, x, π(x))) and let π_S^∗ be the optimal safe policy. Let β and σ² be the bias and variance in the gradient estimate which is incurred due to sampling. Then setting the learning rate η = √((1/σ²)(1/T + ϵ)), we have the expected regret bound:
R(π_S^∗) − E[ (1/T) Σ_{i=1}^{T} R(π_S^(i)) ] = O( σ√(1/T + ϵ) + β + LRζ + ϵm + ϵπ )
Proof. The total bias in gradient estimates is bounded by the sum of (i) the bias incurred by sam-
pling, (ii) the bias incurred by shield interference, and (iii) the bias incurred by using an environment
model rather than the true environment. Part (i) is bounded by assumption, part (ii) is bounded by
Lemma 3, and part (iii) is bounded by Lemma 5. Combining these results, we find that the total bias
in the gradient estimate is O(β + LRζ + ϵm + ϵπ). Plugging this bound into Lemma 4, we reach
the desired result.
B OVERLAPPING POLYHEDRA
In Section 4.3, we claimed that in many environments, the safe region can be represented by over-
lapping polyhedra. In this section, we formalize the notion of “overlapping” in our context and
explain why many practical environments satisfy this property.
We say two polyhedra “overlap” if their intersection has positive volume. That is, polyhedra p1 and
p2 overlap if µ(p1 ∩ p2) > 0 where µ is the Lebesgue measure.
Often in practical continuous control environments, either this property is satisfied, or it is impossi-
ble to verify any safe trajectories at all. This is because in continuous control, the system trajectory is
a path in the state space, and this path has to move between the different polyhedra defining the safe
space. To see how this necessitates our overlapping property, let’s take a look at a few possibilities
for how the path can pass from one polyhedron p1 to a second polyhedron p2. For simplicity, we’ll
assume the polyhedra are closed, but this argument can be extended straightforwardly to open or
partially open polyhedra.
• If the two polyhedra are disconnected, then the system is unable to transition between them
because the system trajectory must define a path in the safe region of the state space. Since
the two sets are disconnected, the path must pass through the unsafe states, and therefore
cannot be safe.
• Suppose the dimension of the state space is n and the intersection of the two polyhedra
is an n − 1 dimensional surface (for example, if the state space is 2D then the polyhedra
intersect in a line segment). In this case, we can add a new polyhedron to the set of safe
polyhedra in order to provide an overlap to both p1 and p2. Specifically, let X be the set
of vertices of p1 ∩ p2. Choose a point x1 in the interior of p1 and a point x2 in the interior
of p2. Now define p′ as the convex hull of X ∪ {x1, x2}. Note that p′ ⊆ p1 ∪ p2, so we
can add p′ to the set of safe polyhedra without changing the safe state space as a whole.
However, p′ overlaps with both p1 and p2, and therefore the modified environment has the
overlapping property. (A code sketch of this construction is given after this list.)
• Otherwise, p1 ∩ p2 is a lower-dimensional surface. Then for every point x ∈ p1 ∩ p2 and
for every ϵ > 0 there exists an unsafe point x′ such that ∥x − x′∥ < ϵ. In order for the
system to transition from p1 to p2, it must pass through a point which is arbitrarily close to
unsafe. As a result, the system must be arbitrarily fragile — any perturbation can result in
unsafe behavior. Because real-world systems are subject to noise and/or modeling error, it
would be impossible to be sure the system would be safe in this case.
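As a concrete illustration of the bridging construction in the second case above, the following sketch (a toy example of ours, not taken from the paper's code) builds p' for two unit boxes that share an edge. The specific boxes and interior points are assumptions chosen for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

# p1 = [0, 1] x [0, 1] and p2 = [1, 2] x [0, 1] meet in the segment {1} x [0, 1].
X = np.array([[1.0, 0.0], [1.0, 1.0]])    # vertices of the intersection of p1 and p2
x1 = np.array([0.5, 0.5])                 # a point in the interior of p1
x2 = np.array([1.5, 0.5])                 # a point in the interior of p2

points = np.vstack([X, x1, x2])
hull = ConvexHull(points)                 # p' = convex hull of X together with x1 and x2

print("vertices of p':\n", points[hull.vertices])
# Each row of hull.equations is [a, b] with a . x + b <= 0, i.e. an H-representation
# of p' that could be appended to the list of safe polyhedra.
print("halfspaces of p':\n", hull.equations)

# p' is full-dimensional (in 2D, ConvexHull.volume is the area); by construction the
# triangle spanned by X and x1 lies inside p1 and the triangle spanned by X and x2
# lies inside p2, so p' is contained in the union of p1 and p2 and overlaps both.
assert hull.volume > 0
```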
C FURTHER EXPERIMENTAL DATA
In this section we provide further details about our experimental setup and results.
Our benchmarks are taken from Anderson et al. (2020) and consist of 10 environments with continuous state and action spaces. The mountain-car and pendulum benchmarks are continuous versions of
the corresponding classical control environments. The acc benchmark represents an adaptive cruise
control environment. The remaining benchmarks represent various situations arising from robotics.
See Anderson et al. (2020), Appendix C for a more complete description of each benchmark.
As mentioned in Section 6, our tool is built on top of MBPO (Janner et al., 2019) using
SAC (Haarnoja et al., 2018a) as the underlying learning algorithm. We gather real data for 10
episodes for each model update, then collect data from 70 simulated episodes before updating the environment model again. We look five time steps into the future during the safety analysis. Our SAC implementation (adapted from Tandon (2018)) uses automatic entropy tuning as proposed in Haarnoja
et al. (2018b). To compare with CPO we use the original implementation from Achiam et al. (2017).
Each training process is cut off after 48 hours. We train each benchmark starting from nine distinct
seeds.
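For reference, the hyperparameters described above can be summarized as follows; the key names are our own informal shorthand and do not correspond to identifiers in the released code.

```python
# Hyperparameters used in our experiments, collected in one place for reference.
SPICE_SETTINGS = {
    "real_episodes_per_model_update": 10,    # environment episodes between model fits
    "simulated_episodes_per_update": 70,     # MBPO-style model rollouts per iteration
    "safety_horizon": 5,                     # lookahead steps used by the safety analysis
    "underlying_algorithm": "SAC",           # with automatic entropy tuning (Haarnoja et al., 2018b)
    "training_time_limit_hours": 48,
    "random_seeds": 9,
}
```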
Because the code for Bharadhwaj et al. (2021) is not available, we use a modified version of our code for comparison, which we label CSC-MBPO. Our implementation follows Algorithm 1 except that WPSHIELD is replaced by an alternative shielding framework which learns a neural safety signal using conservative Q-learning and then resamples actions from the policy until a safe action is found, as described in Bharadhwaj et al. (2021); a schematic sketch of this resampling shield is given below. We chose this implementation in order to give the fairest possible comparison between SPICE and the conservative safety critic approach, since the only difference between the two tools in our experiments is the shielding approach. The code for our tool includes our implementation of CSC-MBPO.
The safety curves for the remaining benchmarks are presented in Figure 5, and the training curves for the remaining benchmarks are presented in Figure 6.
Figure 5: Cumulative safety violations over time (panels: (a) acc, (b) mountain-car, (c) noisy-road, (d) road).
Figure 6: Training curves for SPICE and CPO (panels: (a) acc, (b) mountain-car, (c) noisy-road, (d) road).
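The sketch below outlines the rejection-sampling shield used in our CSC-MBPO baseline. This is a schematic reconstruction rather than the exact implementation: the policy.sample and safety_critic interfaces, the threshold, and the resampling cap are illustrative assumptions.

```python
def csc_shield(policy, safety_critic, state, threshold, max_resamples=100):
    """Resample actions from the policy until the learned critic deems one safe."""
    best_action, best_cost = None, float("inf")
    for _ in range(max_resamples):
        action = policy.sample(state)
        cost = safety_critic(state, action)    # predicted long-term safety cost
        if cost <= threshold:                  # accept the first action deemed safe
            return action
        if cost < best_cost:                   # otherwise remember the least-unsafe sample
            best_action, best_cost = action, cost
    return best_action                         # fallback when no sample passes the check
```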
C.1 EXPLORING THE SAFETY HORIZON
As mentioned in Section 6, SPICE relies on choosing a good horizon over which to compute the
weakest precondition. We will now explore this tradeoff in more detail. Safety curves for each
benchmark under several different choices of horizon are presented in Figure 7. The performance
curves for each benchmark are shown in Figure 8.
There are a few notable phenomena shown in these curves. As expected, in most cases using a safety
horizon of one does not give particularly good safety. This is expected because, as the safety horizon becomes very small, it is easy for the system to end up in a state where there are no safe actions. The obstacle benchmark shows this trend very clearly: as the safety horizon increases, the number of safety violations decreases.
Figure 7: Safety curves for SPICE using different safety horizons (panels: (a) acc, (b) car-racing, (c) mountain-car, (d) noisy-road, (e) noisy-road-2d, (f) obstacle, (g) obstacle2, (h) pendulum, (i) road, (j) road-2d).
On the other hand, several benchmarks (e.g., acc, mountain-car, and noisy-road-2d) show a more
interesting dynamic: very large safety horizons also lead to an increase in safety violations. This is a little less intuitive because, as we look farther into the future, we should be able to avoid more unsafe behaviors. However, there is an explanation for this phenomenon: the imprecision in the environment model (both due to model learning and due to the call to APPROXIMATE) accumulates for each time step we look ahead. As a result, large safety horizons lead to a fairly imprecise analysis. Not only does this interfere with exploration, but it can also lead to an infeasible constraint set in the shield (that is, ϕ in Algorithm 2 becomes unsatisfiable). In this case, the projection in Algorithm 2 is ill-defined, so SPICE falls back on a simplistic backup controller. This controller is not
always able to guarantee safety, leading to an increase in safety violations as the horizon increases.
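The fallback behavior just described can be sketched as follows. This is a schematic outline rather than the actual SPICE code: solve_projection_qp (returning None when the QP is infeasible) and backup_controller are assumed helper functions.

```python
import numpy as np

def shield_action(proposed_action, action_constraints, state,
                  solve_projection_qp, backup_controller):
    """Project the proposed action onto the weakest precondition, one disjunct at a time."""
    best_action, best_dist = None, np.inf
    for (G, h) in action_constraints:          # one (G, h) pair per safe polyhedron
        action = solve_projection_qp(proposed_action, G, h)   # None if this QP is infeasible
        if action is None:
            continue
        dist = np.linalg.norm(action - proposed_action)
        if dist < best_dist:
            best_action, best_dist = action, dist
    if best_action is None:                    # every disjunct was infeasible
        return backup_controller(state)
    return best_action
```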
In practice, we find that a safety horizon of five provides a good amount of safety in most benchmarks
without interfering with training. Smaller or larger values can lead to more safety violations while
also reducing performance a little in some benchmarks. In general, tuning the safety horizon for
each benchmark can yield better results, but for the purposes of this evaluation we have chosen to
use the same horizon throughout.
C.2 QUALITATIVE EVALUATION
Figure 9 shows trajectories sampled from each tool at various stages of training. Specifically, every
100 episodes during training, 100 trajectories were sampled. The plotted trajectories represent the
worst samples from this set of 100. The environment represents a robot moving in a 2D plane which
must reach the green shaded region while avoiding the red crosshatched region. (Notice that while
the two regions appear to move in the figure, they are actually static. The axes in each part of the
figure change in order to represent the entirety of the trajectories.) From this visualization, we can
see that SPICE is able to quickly find a policy which safely reaches the goal every time. By contrast,
CSC-MBPO requires much more training data to find a good policy and encounters more safety
violations along the way. CPO is also slower to converge and more unsafe than SPICE .
Figure 8: Training curves for SPICE using different safety horizons (panels: (a) acc, (b) car-racing, (c) mountain-car, (d) noisy-road, (e) noisy-road-2d, (f) obstacle, (g) obstacle2, (h) pendulum, (i) road, (j) road-2d).
Figure 9: Trajectories at various stages of training.
zzqBoIFOQ1
|
Guiding Safe Exploration with Weakest Preconditions
|
In reinforcement learning for safety-critical settings, it is often desirable for the agent to obey safety constraints at all points in time, including during training. We present a novel neurosymbolic approach called SPICE to solve this safe exploration problem. SPICE uses an online shielding layer based on symbolic weakest preconditions to achieve a more precise safety analysis than existing tools without unduly impacting the training process. We evaluate the approach on a suite of continuous control benchmarks and show that it can achieve comparable performance to existing safe learning techniques while incurring fewer safety violations. Additionally, we present theoretical results showing that SPICE converges to the optimal safe policy under reasonable assumptions.
|
Published as a conference paper at ICLR 2023
GUIDING SAFE EXPLORATION WITH
WEAKEST PRECONDITIONS
Greg Anderson, Swarat Chaudhuri ∗, Isil Dillig ∗,
Department of Computer Science
The University of Texas at Austin
Austin, TX, USA
{ganderso, swarat, isil}@cs.utexas.edu
ABSTRACT
In reinforcement learning for safety-critical settings, it is often desirable for the
agent to obey safety constraints at all points in time, including during training.
We present a novel neurosymbolic approach called SPICE to solve this safe explo-
ration problem. S PICE uses an online shielding layer based on symbolic weakest
preconditions to achieve a more precise safety analysis than existing tools without
unduly impacting the training process. We evaluate the approach on a suite of
continuous control benchmarks and show that it can achieve comparable perfor-
mance to existing safe learning techniques while incurring fewer safety violations.
Additionally, we present theoretical results showing that S PICE converges to the
optimal safe policy under reasonable assumptions.
1 I NTRODUCTION
In many real-world applications of reinforcement learning (RL), it is crucial for the agent to behave
safely during training. Over the years, a body of safe exploration techniques (Garcıa & Fern´andez,
2015) has emerged to address this challenge. Broadly, these methods aim to converge to high-
performance policies while ensuring that every intermediate policy seen during learning satisfies
a set of safety constraints. Recent work has developed neural versions of these methods (Achiam
et al., 2017; Dalal et al., 2018; Bharadhwaj et al., 2021) that can handle continuous state spaces and
complex policy classes.
Any method for safe exploration needs a mechanism for deciding if an action can be safely ex-
ecuted at a given state. Some existing approaches use prior knowledge about system dynamics
(Berkenkamp et al., 2017; Anderson et al., 2020) to make such judgments. A more broadly applica-
ble class of methods make these decisions using learned predictors represented as neural networks.
For example, such a predictor can be a learned advantage function over the constraints (Achiam
et al., 2017; Yang et al., 2020) or a critic network (Bharadhwaj et al., 2021; Dalal et al., 2018) that
predicts the safety implications of an action.
However, neural predictors of safety can require numerous potentially-unsafe environment interac-
tions for training and also suffer from approximation errors. Both traits are problematic in safety-
critical, real-world settings. In this paper, we introduce a neurosymbolic approach to learning safety
predictors that is designed to alleviate these difficulties.
Our approach, called SPICE 1, is similar to Bharadhwaj et al. (2021) in that we use a learned model to
filter out unsafe actions. However, the novel idea in SPICE is to use the symbolic method of weakest
preconditions (Dijkstra, 1976) to compute, from a single-time-step environment model, a predicate
that decides if a given sequence of future actions is safe. Using this predicate, we symbolically
compute a safety shield (Alshiekh et al., 2018) that intervenes whenever the current policy proposes
an unsafe action. The environment model is repeatedly updated during the learning process using
data safely collected using the shield. The computation of the weakest precondition and the shield
is repeated, leading to a more refined shield, on each such update.
∗equal advising
1SPICE is available at https://github.com/gavlegoat/spice.
1
Published as a conference paper at ICLR 2023
The benefit of this approach is sample-efficiency: to construct a safety shield for the next k time
steps, SPICE only needs enough data to learn a single-step environment model. We show this benefit
using an implementation of the method in which the environment model is given by a piecewise
linear function and the shield is computed through quadratic programming (QP). On a suite of chal-
lenging continuous control benchmarks from prior work, S PICE has comparable performance as
fully neural approaches to safe exploration and incurs far fewer safety violations on average.
In summary, this paper makes the following contributions:
• We present the first neurosymbolic framework for safe exploration with learned models of safety.
• We present a theoretical analysis of the safety and performance of our approach.
• We develop an efficient, QP-based instantiation of the approach and show that it offers greater
safety than end-to-end neural approaches without a significant performance penalty.
2 P RELIMINARIES
Safe Exploration. We formalize safe exploration in terms of a constrained Markov decision
process (CMDP) with a distinguished set of unsafe states. Specifically, a CMDP is a structure
M = (S, A, r, P, p0, c) where S is the set of states, A is the set of actions, r : S × A →R is a
reward function, P(x′ | x, u), where x, x′ ∈ Sand u ∈ A, is a probabilistic transition function,
p0 is an initial distribution over states, and c is a cost signal. Following prior work (Bharadhwaj
et al., 2021), we consider the case where the cost signal is a boolean indicator of failure, and we
further assume that the cost signal is defined by a set of unsafe states SU . That is, c(x) = 1 if
x ∈ SU and c(x) = 0 otherwise. A policy is a stochastic function π mapping states to distribu-
tions over actions. A policy, in interaction with the environment, generates trajectories (or rollouts)
x0, u0, x1, u1, . . . ,un−1, xn where x0 ∼ p0, each ui ∼ π(xi), and each xi+1 ∼ P(xi, ui). Con-
sequently, each policy induces probability distributions Sπ and Aπ on the state and action. Given a
discount factor γ <1, the long-term return of a policy π is R(π) = Exi,ui∼π
P
i γir(xi, ui)
.
The goal of standard reinforcement learning is to find a policy π∗ = arg max π R(π). Popu-
lar reinforcement learning algorithms accomplish this goal by developing a sequence of policies
π0, π1, . . . , πN such that πN ≈ π∗. We refer to this sequence of polices as a learning process.
Given a bound δ, the goal of safe exploration is to discover a learning process π0, . . . , πN such that
πN = arg maxπ R(π) and ∀1 ≤ i ≤ N. Px∼Sπi
(x ∈ SU ) < δ
That is, the final policy in the sequence should be optimal in terms of the long-term reward and every
policy in the sequence (except forπ0) should have a bounded probabilityδ of unsafe behavior. Note
that this definition does not place a safety constraint onπ0 because we assume that nothing is known
about the environment a priori.
Weakest Preconditions. Our approach to the safe exploration problem is built on weakest precon-
ditions (Dijkstra, 1976). At a high level, weakest preconditions allow us to “translate” constraints
on a program’s output to constraints on its input. As a very simple example, consider the function
x 7→ x+1. The weakest precondition for this function with respect to the constraintret >0 (where
ret indicates the return value) would be x >−1. In this work, the “program” will be a model of the
environment dynamics, with the inputs being state-action pairs and the outputs being states.
For the purposes of this paper, we present a simplified weakest precondition definition that is tailored
towards our setting. Let f : S × A →2S be a nondeterministic transition function. As we will see
in Section 4, f represents a PAC-style bound on the environment dynamics. We define an alphabet
Σ which consists of a set of symbolic actions ω0, . . . , ωH−1 and states χ0, . . . , χH. Each symbolic
state and action can be thought of as a variable representing an a priori unkonwn state and action.
Let ϕ be a first order formula over Σ. The symbolic states and actions represent a trajectory in the
environment defined by f, so they are linked by the relationχi+1 ∈ f(χi, ωi) for 0 ≤ i < H. Then,
for a given i, the weakest precondition of ϕ is a formula ψ over Σ \ {χi+1} such that (1) for all
e ∈ f(χi, ωi), we have ψ =⇒ ϕ[χi+1 7→ e] and (2) for all ψ′ satisfying condition (1), ψ′ =⇒ ψ.
Here, the notation ϕ[χi+1 7→ e] represents the formula ϕ with all instances of χi+1 replaced by
the expression e. Intuitively, the first condition ensures that, after taking one environment step from
χi under action ωi, the system will always satisfy ϕ, no matter how the nondeterminism of f is
resolved. The second condition ensures that ϕ is as permissive as possible, which prevents us from
ruling out states and actions that are safe in reality.
2
Published as a conference paper at ICLR 2023
3 S YMBOLIC PRECONDITIONS FOR CONSTRAINED EXPLORATION
Algorithm 1 The main learning algorithm
procedure SPICE
Initialize an empty dataset D and random policy π
for epoch in 1 . . . Ndo
if epoch = 1 then
πS ← π
else
πS ← λx.WPSHIELD (M, x, π(x))2
Unroll real trajectores {(si, ai, s′
i, ri)} under πS
D = D ∪ {(si, ai, s′
i, ri)}
M ← LEARN ENVMODEL (D)
Optimize π using the simulated environment M
Our approach, Symbolic Precondi-
tions for Constrained Exploration
(SPICE ), uses a learned environment
model to both improve sample effi-
ciency and support safety analysis at
training time. To do this, we build on
top of model-based policy optimiza-
tion (MBPO) (Janner et al., 2019).
Similar to MBPO, the model in our
approach is used to generate syn-
thetic policy rollout data which can
be fed into a model-free learning al-
gorithm to train a policy. In contrast
to MBPO, we reuse the environment
model to ensure the safety of the sys-
tem. This dual use of the environment allows for both efficient optimization and safe exploration.
The main training procedure is shown in Algorithm 1 and simultaneously learns an environment
model M and the policy π. The algorithm maintains a dataset D of observed environment tran-
sitions, which is obtained by executing the current policy π in the environment. S PICE then uses
this dataset to learn an environment M, which is used to optimize the current policy π, as done in
model-based RL. The key difference of our technique from standard model-based RL is the use of
a shielded policy πS when unrolling trajectories to construct dataset D. This is necessary for safe
exploration because executing π in the real environment could result in safety violations. In contrast
to prior work, the shielded policy πS in our approach is defined by an online weakest precondition
computation which finds a constraint over the action space which symbolically represents all safe
actions. This procedure is described in detail in Section 4.
4 S HIELDING WITH POLYHEDRAL WEAKEST PRECONDITIONS
4.1 O VERVIEW OF SHIELDING APPROACH
Algorithm 2 Shielding a proposed action
procedure WPSHIELD (M, x0, u∗
0)
f ← APPROXIMATE (M, x0, u∗
0)
ϕH ← VH
i=1 χi ∈ S \ SU
for t from H − 1 down to 0 do
ϕt ← WP(ϕt+1, f)
ϕ ← ϕ0[χ0 7→ x0]
(u0, . . . ,uH−1) = arg min
u′
0,...,u′
H−1⊨ϕ
∥u′
0−u∗
0∥2
return u0
Our high-level online intervention approach is
presented in Algorithm 2. Given an environ-
ment model M, the current state x0 and a
proposed action u∗
0, the WPSHIELD procedure
chooses a modified actionu0 which is as simi-
lar as possible to u∗
0 while ensuring safety. We
consider an action to be safe if, after executing
that action in the environment, there exists a
sequence of follow-up actions u1, . . . ,uH−1
which keeps the system away from the unsafe
states over a finite time horizonH. In more de-
tail, our intervention technique works in three
steps:
Approximating the environment. Because computing the weakest precondition of a constraint
with respect to a complex environment model (e.g., deep neural network) is intractable, Algorithm 2
calls the APPROXIMATE procedure to obtain a simpler first-order local Taylor approximation to the
environment model centered at (x0, u∗
0). That is, given the environment model M, it computes
matrices A and B, a vector c, and an error ε such that f(x, u) = Ax + Bu + c + ∆ where ∆ is
an unknown vector with elements in [−ε, ε]. The error term is computed based on a normal Taylor
series analysis such that with high probability, M(x, u) ∈ f(x, u) in a region close to x0 and u∗
0.
Computation of safety constraint.Given a linear approximationf of the environment, Algorithm 2
iterates backwards in time, starting with the safety constraint ϕH at the end of the time horizon H.
2Note that λ is an anonymous function operator rather than a regularization constant.
3
Published as a conference paper at ICLR 2023
In particular, the initial constraint ϕH asserts that all (symbolic) states χ1, . . . , χH reached within
the time horizon are inside the safe region. Then, the loop inside Algorithm 2 uses theWP procedure
(described in the next two subsections) to eliminate one symbolic state at a time from the formulaϕi.
After the loop terminates, all of the state variables except for χ0 have been eliminated from ϕ0, so
ϕ0 is a formula over χ0, ω0, . . . , ωH−1. The next line of Algorithm 2 simply replaces the symbolic
variable χ0 with the current state x0 in order to find a constraint over only the actions.
Projection onto safe space. The final step of the shielding procedure is to find a sequence
u0, . . . ,uH−1 of actions such that (1) ϕ is satisfied and (2) the distance ∥u0 − u∗
0∥ is minimized.
Here, the first condition enforces the safety of the shielded policy, while the second condition ensures
that the shielded policy is as similar as possible to the original one. The notationu0, . . . ,uH−1 ⊨ ϕ
indicates that ϕ is true when the concrete values u0, . . . ,uH−1 are substituted for the symbolic val-
ues ω0, . . . , ωH−1 in ϕ. Thus, the arg min in Algorithm 2 is effectively a projection on the set of
action sequences satisfying ϕ. We discuss this optimization problem in Section 4.4.
4.2 W EAKEST PRECONDITIONS FOR POLYHEDRA
In this section, we describe the WP procedure used in Algorithm 2 for computing the weakest pre-
condition of a safety constraint ϕ with respect to a linear environment model f. To simplify presen-
tation, we assume that the safe space is given as a convex polyhedron — i.e., all safe states satisfy
the linear constraint Px + q ≤ 0. We will show how to relax this restriction in Section 4.3.
Recall that our environment approximation f is a linear function with bounded error, so we have
constraints over the symbolic states and actions:χi+1 = Aχi+Bωi+c+∆ where ∆ is an unknown
vector with elements in[−ε, ε]. In order to compute the weakest precondition of a linear constraintϕ
with respect to f, we simply replace each instance ofχi+1 in ϕ with Aχi +Bωi +c+∆∗ where ∆∗
is the most pessimistic possibility for ∆. Because the safety constraints are linear and the expression
for χi+1 is also linear, this substitution results in a new linear formula which is a conjunction of
constraints of the form wT ν + vT ∆∗ ≤ y. For each element ∆i of ∆, if the coefficient of ∆∗
i is
positive in v, then we choose ∆∗
i = ε. Otherwise, we choose ∆∗
i = −ε. This substitution yields the
maximum value of vT ∆∗ and is therefore the most pessimistic possibility for ∆∗
i .
Figure 1: Weakest precondition example.
Example. We illustrate the weakest precondition
computation through simple example: Consider a
car driving down a (one-dimensional) road whose
goal is to reach the other end of the road as quickly
as possible while obeying a speed limit. The state
of the car is a position x and velocity v. The action
space consists of an acceleration a. Assume there
is bounded noise in the velocity updates so the dy-
namics are x′ = x + 0.1v and v′ = v + 0.1a + ε
where −0.01 ≤ ε ≤ 0.01 and the safety constraint
is v ≤ 1. Suppose the current velocity is v0 = 0.9
and the safety horizon is two. Then, starting with
the safety constraint v1 ≤ 1 ∧ v2 ≤ 1 and step-
ping back through the environment dynamics, we get
the precondition v1 ≤ 1 ∧ v1 + 0.1a1 + ε1 ≤ 1.
Stepping back one more time, we find the condition
v0 + 0.1a0 + ε2 ≤ 1 ∧ v0 + 0.1a0 + 0.1a1 + ε1 + ε2 ≤ 1. Picking the most pessimistic values for
ε1 and ε2 to reach v0 + 0.1a0 + 0.01 ≤ 1 ∧ v0 + 0.1a0 + 0.1a1 + 0.02 ≤ 1. Since v0 is specified,
we can replace v0 with 0.9 to simplify this to a constraint over the two actions a0 and a1, namely
0.91 + 0.1a0 ≤ 1 ∧ 0.92 + 0.1a0 + 0.1a1 ≤ 1. Figure 1 shows this region as the shaded triangle
on the left. Any pair of actions (a0, a1) which lies inside the shaded triangle is guaranteed to satisfy
the safety condition for any possible values of ε1 and ε2.
4.3 E XTENSION TO MORE COMPLEX SAFETY CONSTRAINTS
In this section, we extend our weakest precondition computation technique to the setting where
the safe region consists of unions of convex polyhedra. That is, the state space is represented as
a set of matrices Pi and a set of vectors qi such that S \ SU = SN
i=1 {x ∈ S |Pix + qi ≤ 0}.
4
Published as a conference paper at ICLR 2023
Note that, while individual polyhedra are limited in their expressive power, unions of polyhedra can
approximate reasonable spaces with arbitrary precision. This is because a single polyhedron can
approximate a convex set arbitrarily precisely (Bronshteyn & Ivanov, 1975), so unions of polyhedra
can approximate unions of convex sets.
In this case, the formula ϕH in Algorithm 2 has the form ϕH = VH
j=1
WN
i=1 Piχj + qi ≤ 0.
However, the weakest precondition of a formula of this kind can be difficult to compute. Because
the system may transition between two different polyhedra at each time step, there is a combi-
natorial explosion in the size of the constraint formula, and a corresponding exponential slow-
down in the weakest precondition computation. Therefore, we replace ϕH with an approximation
ϕ′
H = WN
i=1
VH
j=1 Piχj + qi ≤ 0 (that is, we swap the conjunction and the disjunction). Note that
ϕH and ϕ′
H are not equivalent, but ϕ′
H is a stronger formula (i.e., ϕ′
H =⇒ ϕH). Thus, any states
satisfying ϕ′
H are also guaranteed to satisfy ϕH, meaning that they will be safe. More intuitively,
this modification asserts that, not only does the state stay within the safe region at each time step,
but it stays within the same polyhedron at each step within the time horizon.
With this modified formula, we can pull the disjunction outside the weakest precondition, i.e.,
WP
_N
i=1
^H
j=1
Piχj + qi ≤ 0, f
=
_N
i=1
WP
^H
j=1
Piχj + qi ≤ 0, f
.
The conjunctive weakest precondition on the right is of the form described in Section 4.2, so this
computation can be done efficiently. Moreover, the number of disjuncts does not grow as we iterate
through the loop in Algorithm 2. This prevents the weakest precondition formula from growing out
of control, allowing for the overall weakest precondition on ϕ′
H to be computed quickly.
Intuitively, the approximation we make to the formulaϕH does rule out some potentially safe action
sequences. This is because it requires the system to stay within a single polyhedron over the entire
horizon. However, this imprecision can be ameliorated in cases where the different polyhedra com-
prising the state space overlap one another (and that overlap has non-zero volume). In that case, the
overlap between the polyhedra serves as a “transition point,” allowing the system to maintain safety
within one polyhedron until it enters the overlap, and then switch to the other polyhedron in order
to continue its trajectory. A formal development of this property, along with an argument that it is
satisfied in many practical cases, is laid out in Appendix B.
Example. Consider an environment which represents a robot moving in 2D space. The state space
is four-dimensional, consisting of two position elements x and y and two velocity elements vx and
vy. The action space consists of two acceleration terms ax and ay, giving rise to the dynamics
x = x + 0.1vx y = y + 0.1vy
vx = vx + 0.1ax vy = vy + 0.1ay
In this environment, the safe space is x ≥ 2 ∨ y ≤ 1, so that the upper-left part of the state space
is considered unsafe. Choosing a safety horizon of H = 2 , we start with the initial constraint
(x1 ≥ 2 ∨ y1 ≤ 1) ∧ (x1 ≥ 2 ∧ y2 ≤ 1). We transform this formula to the stronger formua
(x1 ≥ 2 ∧ x2 ≥ 2) ∨ (y1 ≤ 1 ∧ y2 ≤ 1). By stepping backwards through the weakest precondition
twice, we obtain the following formula over only the current state and future actions:
(x0 + 0.1vx
0 ≥ 2 ∧ x0 + 0.2vx
0 + 0.01ax
0 ≥ 2) ∨ (y0 + 0.1vy
0 ≤ 1 ∧ y0 + 0.2vy
0 + 0.01ay
0 ≤ 1).
4.4 P ROJECTION ONTO THE WEAKEST PRECONDITION
After applying the ideas from Section 4.3, each piece of the safe space yields a set of linear con-
straints over the action sequence u0, . . . ,uH−1. That is, ϕ from Algorithm 2 has the form
ϕ =
_N
i=1
XH−1
j=0
Gi,juj + hi ≤ 0.
Now, we need to find the action sequence satisfyingϕ for which the first action most closely matches
the proposed actionu∗
0. In order to do this, we can minimize the objective function∥u0 −u∗
0∥2. This
function is quadratic, so we can represent this minimization problem as N quadratic programming
problems. That is, for each polyhedron Pi, qi in the safe region, we solve:
minimize ∥u∗
0 − u0∥2
subject to
XH−1
j=0
Gi,juj + hi ≤ 0
5
Published as a conference paper at ICLR 2023
Such problems can be solved efficiently using existing tools. By applying the same technique inde-
pendently to each piece of the safe state space, we reduce the projection problem to a relatively small
number of calls to a quadratic programming solver. This reduction allows the shielding procedure
to be applied fast enough to generate the amount of data needed for gradient-based learning.
Example: Consider again Figure 1. Suppose the proposed action isu∗
0 = 1, represented by the solid
line in Figure 1. Since the proposed action is outside of the safe region, the projection operation will
find the point inside the safe region that minimizes the distance along the a0 axis only. This leads
to the dashed line in Figure 1, which is the action u0 that is as close as possible to u∗
0 while still
intersecting the safe region represented by the shaded triangle. Therefore, in this case, W PSHIELD
would return 0.8 as the safe action.
5 T HEORETICAL RESULTS
We will now develop theoretical results on the safety and performance of agents trained with SPICE .
For brevity, proofs have been deferred to Appendix A.
For the safety theorem, we will assume the model is approximately accurate with high probability
and that the A PPROXIMATE procedure gives a sound local approximation to the model. Formally,
Prx′∼P(·|x,u)[∥M(x, u) − x′∥ > ε] < δM , and if f = APPROXIMATE (M, x0, u∗
0) then for all
actions u and all states x reachable within H time steps, M(x, u) ∈ f(x, u).
Theorem 1. Let x0 be a safe state and let π be any policy. For 0 ≤ i < H, let ui =
WPSHIELD (M, xi, π(xi)) and let xi+1 be the result of taking action ui at state xi. Then with
probability at least (1 − δM )i, xi is safe.
This theorem shows why SPICE is better able to maintain safety compared to prior work. Intuitively,
constraint violations can only occur in SPICE when the environment model is incorrect. In contrast,
statistical approaches to safe exploration are subject to safety violations caused by either modeling
error or actions which are not safe even with respect to the environment model. Note that for a
safety level δ and horizon H, a modeling error can be computed as δM < 1 −(1 −δ)/ exp(H −1).
The performance analysis is based on treating Algorithm 1 as a functional mirror descent in the
policy space, similar to Verma et al. (2019) and Anderson et al. (2020). We assume a class of
neural policies F, a class of safe policies G, and a joint class H of neurosymbolic policies. We
proceed by considering the shielded policy λx.WPSHIELD (M, x, πN (x)) to be a projection of the
neural policy πN into G for a Bregman divergence DF defined by a function F. We define a safety
indicator Z which is one whenever W PSHIELD (M, x, π(i)(x)) = π(i)(x) and zero otherwise, and
we let ζ = E[1 − Z]. Under reasonable assumptions (see Appendix A for a full discussion), we
prove a regret bound for Algorithm 1.
Theorem 2. Let π(i)
S for 1 ≤ i ≤ T be a sequence of safe policies learned by SPICE (i.e., π(i)
S =
λx.WPSHIELD (M, x, π(x))) and let π∗
S be the optimal safe policy. Additionally we assume the
reward functionR is Lipschitz in the policy space and letLR be the Lipschitz constant ofR, β and σ2
be the bias and variance introduced by sampling in the gradient computation, ϵ be an upper bound
on the bias incurred by using projection onto the weakest precondition to approximate imitation
learning, ϵm be an upper bound the KL divergence between the model and the true environment
dynamics at all time steps, and ϵπ be an upper bound on the TV divergence between the policy
used to gather data and the policy being trained at all time steps. Then setting the learning rate
η =
q
1
σ2
1
T + ϵ
, we have the expected regret bound:
R (π∗
S) − E
1
T
XT
i=1
R
π(i)
S
= O
σ
r
1
T + ϵ + β + LRζ + ϵm + ϵπ
!
This theorem provides a few intuitive results, based on the additive terms in the regret bound. First,
ζ is the frequency with which we intervene in network actions and as ζ decreases, the regret bound
becomes tighter. This fits our intuition that, as the shield intervenes less and less, we approach
standard reinforcement learning. The two terms ϵm and ϵπ are related to how accurately the model
captures the true environment dynamics. As the model becomes more accurate, the policy converges
to better returns. The other terms are related to standard issues in reinforcement learning, namely
the error incurred by using sampling to approximate the gradient.
6
Published as a conference paper at ICLR 2023
(a) car-racing
(b) noisy-road-2d
(c) obstacle
(d) obstacle2
(e) pendulum
(f) road-2d
Figure 2: Cumulative safety violations over time.
6 E XPERIMENTAL EVALUATION
We now turn to a practical evaluation of SPICE . Our implementation of S PICE uses PyEarth (Rudy,
2013) for model learning and CVXOPT (Anderson et al., 2022) for quadratic programming. Our
learning algorithm is based on MBPO (Janner et al., 2019) using Soft Actor-Critic (Haarnoja et al.,
2018a) as the underlying model-free learning algorithm. Our code is adapted from Tandon (2018).
We test S PICE using the benchmarks considered in Anderson et al. (2020). Further details of the
benchmarks and hyperparameters are given in Appendix C.
Benchmark CPO CSC-MBPO S PICE
acc 684 137 286
car-racing 2047 1047 1169
mountain-car 2374 2389 6
noisy-road 0 0 0
noisy-road-2d 286 37 31
obstacle 708 124 2
obstacle2 5592 1773 1861
pendulum 1933 2610 1211
road 0 0 0
road-2d 103 64 41
Average 9.48 3.77 1
Table 1: Safety violations during training.
We compare against two baseline approaches: Con-
strained Policy Optimization (CPO) (Achiam et al.,
2017), a model-free safe learning algorithm, and a
version of our approach which adopts the conserva-
tive safety critic shielding framework from Bharad-
hwaj et al. (2021) (CSC-MBPO). Details of the
CSC-MBPO approach are given in Appendix C. We
additionally tested MPC-RCE (Liu et al., 2020), an-
other model-based safe-learning algorithm, but we
find that it is too inefficient to be run on our bench-
marks. Specifically MPC-RCE was only able to fin-
ish on average 162 episodes within a 2-day time pe-
riod. Therefore, we do not include MPC-RCE in the
results presented in this section.
Safety. First, we evaluate how well our approach
ensures system safety during training. In Table 1,
we present the number of safety violations encountered during training for our baselines. The last
row of the table shows the average increase in the number of safety violations compared to S PICE
(computed as the geometric mean of the ratio of safety violations for each benchmark). This table
shows that SPICE is safer than CPO in every benchmark and achieves, on average, a 89% reduction
in safety violations. CSC-MBPO is substantially safer than CPO, but still not as safe as S PICE . We
achieve a 73% reduction in safety violations on average compared to CSC-MBPO. To give a more
detailed breakdown, Figure 2 shows how the safety violations accumulate over time for several of
our benchmarks. The solid line represents the mean over all trials while the shaded envelope shows
the minimum and maximum values. As can be seen from these figures, CPO starts to accumulate
violations more quickly and continues to violate the safety property more over time than S PICE .
Figures for the remaining benchmarks can be found in Appendix C.
Note that there are a few benchmarks (acc, car-racing, and obstacle2) where S PICE incurs more
violations than CSC-MBPO. There are two potential reasons for this increase. First, S PICE relies
7
Published as a conference paper at ICLR 2023
(a) car-racing
(b) noisy-road-2d
(c) obstacle
(d) obstacle2
(e) pendulum
(f) road-2d
Figure 3: Training curves for SPICE and CPO.
on choosing a model class through which to compute weakest preconditions (i.e., we need to fix an
APPROXIMATE function in Algorithm 2). For these experiments, we use a linear approximation,
but this can introduce a lot of approximation error. A more complex model class allowing a more
precise weakest precondition computation may help to reduce safety violations. Second, SPICE uses
a bounded-time analysis to determine whether a safety violation can occur within the next few time
steps. By contrast, CSC-MBPO uses a neural model to predict the long-term safety of an action. As
a result, actions which result in safety violations far into the future may be easier to intercept using
the CSC-MBPO approach. Given that S PICE achieves much lower safety violations on average, we
think these trade-offs are desirable in many situations.
Figure 4: Trajectories early in training.
Performance. We also test the performance of the
learned policies on each benchmark in order to under-
stand what impact our safety techniques have on model
learning. Figure 3 show the average return over time for
SPICE and the baselines. These curves show that in most
cases SPICE achieves a performance close to that of CPO,
and about the same as CSC-MBPO. We believe that the
relatively modest performance penalty incurred by SPICE
is an acceptable trade-off in some safety-critical systems
given the massive improvement in safety. Further results
are presented in Appendix C.
Qualitative Evaluation. Figure 4 shows the trajectories
of policies learned by SPICE , CPO, and CSC-MBPO part-
way through training (after 300 episodes). In this figure, the agent controls a robot moving on a 2D
plane which must reach the green shaded area while avoiding the red cross-hatched area. For each
algorithm, we sampled 100 trajectories and plotted the worst one. Note that, at this point during
training, both CPO and CSC-MBPO behave unsafely, while the even worst trajectory sampled un-
der SPICE was still safe. See Appendix C for a more complete discussion of how these trajectories
evolve over time during training.
7 R ELATED WORK
Existing work in safe reinforcement learning can be categorized by the kinds of guarantees it pro-
vides: statistical approaches bound the probability that a violation will occur, while worst-case
analyses prove that a policy can never reach an unsafe state. S PICE is, strictly speaking, a statis-
tical approach—without a predefined environment model, we cannot guarantee that the agent will
8
Published as a conference paper at ICLR 2023
never behave unsafely. However, as we show experimentally, our approach is substantially safer in
practice than existing safe learning approaches based on learned cost models.
Statistical Approaches. Many approaches to the safe reinforcement learning problem provide sta-
tistical bounds on the system safety (Achiam et al., 2017; Liu et al., 2020; Yang et al., 2020; Ma
et al., 2021; Zhang et al., 2020; Satija et al., 2020). These approaches maintain an environment
model and then use a variety of statistical techniques to generate a policy which is likely to be safe
with respect to the environment model. This leads to two potential sources of unsafe behavior: the
policy may be unsafe with respect to the model, or the model may be an inaccurate representation
of the environment. Compared to these approaches, we eliminate the first source of error by always
generating policies that are guaranteed to be safe with respect to an environment model. We show
in our experiments that this drastically reduces the number of safety violations encountered in prac-
tice. Some techniques use a learned model together with a linearized cost model to provide bounds,
similar to our approach (Dalal et al., 2018; Li et al., 2021). However, in these works, the system
only looks ahead one time step and relies on the assumption that the cost signal cannot have too
much inertia. Our work alleviates this problem by providing a way to look ahead several time steps
to achieve a more precise safety analysis.
A subset of the statistical approaches are tools that maintain neural models of the cost function in
order to intervene in unsafe behavior (Bharadhwaj et al., 2021; Yang et al., 2021; Yu et al., 2022).
These approaches maintain a critic network which represents the long-term cost of taking a particular
action in a particular state. However, because of the amount of data needed to train neural networks
accurately, these approaches suffer from a need to collect data in several unsafe trajectories in order
to build the cost model. Our symbolic approach is more data-efficient, allowing the system to avoid
safety violations more often in practice. This is borne out by the experiments in Section 6.
Worst-Case Approaches. Several existing techniques for safe reinforcement learning provide for-
mally verified guarantees with respect to a worst-case environment model, either during training
(Anderson et al., 2020) or at convergence (Alshiekh et al., 2018; Bacci et al., 2021; Bastani et al.,
2018; Fulton & Platzer, 2019; Gillula & Tomlin, 2012; Zhu et al., 2019). An alternative class of
approaches uses either a nominal environment model (Koller et al., 2018; Fisac et al., 2019) or
a user-provided safe policy as a starting point for safe learning (Chow et al., 2018; Cheng et al.,
2019). In both cases, these techniques require a predefined model of the dynamics of the environ-
ment. In contrast, our technique does not require the user to specify any model of the environment,
so it can be applied to a much broader set of problems.
8 C ONCLUSION
SPICE is a new approach to safe exploration that combines the advantages of gradient-based learn-
ing with symbolic reasoning about safety. In contrast to prior work on formally verified exploration
(Anderson et al., 2020), SPICE can be used without a precise, handwritten specification of the envi-
ronment behavior. The linchpin of our approach is a new policy intervention which can efficiently
intercept unsafe actions and replace them with actions which are as similar as possible, but prov-
ably safe. This intervention method is fast enough to support the data demands of gradient-based
reinforcement learning and precise enough to allow the agent to explore effectively.
There are a few limitations to S PICE . Most importantly, because the interventions are based on a
linearized environment model, they are only accurate in a relatively small region near the current
system state. This in turn limits the time horizons which can be considered in the safety analysis,
and therefore the strength of the safety properties. Our experiments show that S PICE is still able
to achieve good empirical safety in this setting, but a more advanced policy intervention that can
handle more complex environment models could further improve these results. Additionally, SPICE
can make no safety guarantees about the initial policy used to construct the first environment model,
since there is no model to verify against at the time that that policy is executed. This issue could
be alleviated by assuming a conservative initial model which is refined over time, and there is an
interesting opportunity for future work combining partial domain expertise with learned dynamics.
9
Published as a conference paper at ICLR 2023
FUNDING ACKNOWLEDGEMENTS
This work was supported in part by the United States Air Force and DARPA under Contract No.
FA8750-20-C-0002, by ONR under Award No. N00014-20-1-2115, and by NSF under grants CCF-
1901376 and CCF-1918889. Compute resources for the experiments were provided by the Texas
Advanced Computing Center.
REFERENCES
Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In
Proceedings of the 34th International Conference on Machine Learning - Volume 70 , ICML’17,
pp. 22–31. JMLR.org, 2017.
Mohammed Alshiekh, Roderick Bloem, R ¨udiger Ehlers, Bettina K ¨onighofer, Scott Niekum, and
Ufuk Topcu. Safe reinforcement learning via shielding. In Sheila A. McIlraith and Kil-
ian Q. Weinberger (eds.), Proceedings of the Thirty-Second AAAI Conference on Artificial In-
telligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and
the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New
Orleans, Louisiana, USA, February 2-7, 2018 , pp. 2669–2678. AAAI Press, 2018. URL
https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17211.
Greg Anderson, Abhinav Verma, Isil Dillig, and Swarat Chaudhuri. Neurosymbolic reinforcement
learning with formally verified exploration. In H. Larochelle, M. Ranzato, R. Hadsell, M.F.
Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems , volume 33, pp.
6172–6183. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/
paper/2020/file/448d5eda79895153938a8431919f4c9f-Paper.pdf.
Martin Anderson, Joachim Dahl, and Lieven Vandenberghe. CVXOPT. https://github.
com/cvxopt/cvxopt, 2022.
Edoardo Bacci, Mirco Giacobbe, and David Parker. Verifying reinforcement learning up to infinity.
In Zhi-Hua Zhou (ed.), Proceedings of the Thirtieth International Joint Conference on Artificial
Intelligence, IJCAI-21, pp. 2154–2160. International Joint Conferences on Artificial Intelligence
Organization, 8 2021. doi: 10.24963/ijcai.2021/297. URL https://doi.org/10.24963/
ijcai.2021/297. Main Track.
Osbert Bastani, Yewen Pu, and Armando Solar-Lezama. Verifiable reinforcement learning via policy
extraction. In Advances in Neural Information Processing Systems, pp. 2494–2504, 2018.
Felix Berkenkamp, Matteo Turchetta, Angela Schoellig, and Andreas Krause. Safe model-based
reinforcement learning with stability guarantees. Advances in neural information processing sys-
tems, 30, 2017.
Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, and Ani-
mesh Garg. Conservative safety critics for exploration. In International Conference on Learning
Representations, 2021. URL https://openreview.net/forum?id=iaO86DUuKi.
E. M. Bronshteyn and L. D. Ivanov. The approximation of of convex sets by polyhedra.Sib Math J,
16(5):852–853, September-October 1975. doi: 10.1007/BF00967115.
Richard Cheng, G ´abor Orosz, Richard M. Murray, and Joel W. Burdick. End-to-end safe re-
inforcement learning through barrier functions for safety-critical continuous control tasks. In
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence , AAAI’19. AAAI
Press, 2019. ISBN 978-1-57735-809-1. doi: 10.1609/aaai.v33i01.33013387. URL https:
//doi.org/10.1609/aaai.v33i01.33013387.
Yinlam Chow, Ofir Nachum, Edgar Duenez-Guzman, and Mohammad Ghavamzadeh. A lyapunov-
based approach to safe reinforcement learning. In Proceedings of the 32nd International Con-
ference on Neural Information Processing Systems, NeurIPS’18, pp. 8103–8112, Red Hook, NY ,
USA, 2018. Curran Associates Inc.
10
Published as a conference paper at ICLR 2023
Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin Paduraru, and Yuval
Tassa. Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757, 2018.
Edsger Wybe Dijkstra. A discipline of programming, volume 613924118. prentice-hall Englewood
Cliffs, 1976.
Jaime F. Fisac, Anayo K. Akametalu, Melanie N. Zeilinger, Shahab Kaynama, Jeremy Gillula, and
Claire J. Tomlin. A general safety framework for learning-based control in uncertain robotic
systems. IEEE Transactions on Automatic Control, 64(7):2737–2752, 2019. doi: 10.1109/TAC.
2018.2876389.
Nathan Fulton and Andr´e Platzer. Verifiably safe off-model reinforcement learning. InInternational
Conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 413–430.
Springer, 2019.
Javier Garcıa and Fernando Fern ´andez. A comprehensive survey on safe reinforcement learning.
Journal of Machine Learning Research, 16(1):1437–1480, 2015.
Jeremy H. Gillula and Claire J. Tomlin. Guaranteed safe online learning via reachability: tracking a
ground target using a quadrotor. In IEEE International Conference on Robotics and Automation,
ICRA 2012, 14-18 May, 2012, St. Paul, Minnesota, USA , pp. 2723–2730, 2012. doi: 10.1109/
ICRA.2012.6225136. URL https://doi.org/10.1109/ICRA.2012.6225136.
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy
maximum entropy deep reinforcement learning with a stochastic actor. In Jennifer Dy and An-
dreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning ,
volume 80 of Proceedings of Machine Learning Research , pp. 1861–1870. PMLR, 10–15 Jul
2018a. URL https://proceedings.mlr.press/v80/haarnoja18b.html.
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash
Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-critic algo-
rithms and applications, 2018b. URL https://arxiv.org/abs/1812.05905.
Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-
based policy optimization. In Proceedings of the 33rd International Conference on Neural Infor-
mation Processing Systems, Red Hook, NY, USA, 2019. Curran Associates Inc.
Torsten Koller, Felix Berkenkamp, Matteo Turchetta, and Andreas Krause. Learning-based model
predictive control for safe exploration. In2018 IEEE Conference on Decision and Control (CDC),
pp. 6059–6066, 2018. doi: 10.1109/CDC.2018.8619572.
Yutong Li, Nan Li, H. Eric Tseng, Anouck Girard, Dimitar Filev, and Ilya Kolmanovsky. Safe
reinforcement learning using robust action governor. In Ali Jadbabaie, John Lygeros, George J.
Pappas, Pablo A. Parrilo, Benjamin Recht, Claire J. Tomlin, and Melanie N. Zeilinger (eds.),
Proceedings of the 3rd Conference on Learning for Dynamics and Control , volume 144 of Pro-
ceedings of Machine Learning Research , pp. 1093–1104. PMLR, 07 – 08 June 2021. URL
https://proceedings.mlr.press/v144/li21b.html.
Zuxin Liu, Hongyi Zhou, Baiming Chen, Sicheng Zhong, Martial Hebert, and Ding Zhao. Con-
strained model-based reinforcement learning with robust cross-entropy method, 2020.
Yecheng Jason Ma, Andrew Shen, Osbert Bastani, and Dinesh Jayaraman. Conservative and adap-
tive penalty for model-based safe reinforcement learning, 2021. URL https://arxiv.org/
abs/2112.07701.
Jason Rudy. PyEarth. https://github.com/scikit-learn-contrib/py-earth,
2013.
Harsh Satija, Philip Amortila, and Joelle Pineau. Constrained Markov decision processes via back-
ward value functions. In Proceedings of the 37th International Conference on Machine Learning,
ICML’20. JMLR.org, 2020.
Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods
for reinforcement learning with function approximation. In Proceedings of the 12th International
Conference on Neural Information Processing Systems , NIPS’99, pp. 1057–1063, Cambridge,
MA, USA, 1999. MIT Press.
Pranjal Tandon. PyTorch Soft Actor-Critic. https://github.com/pranz24/
pytorch-soft-actor-critic, 2018.
Abhinav Verma, Hoang Le, Yisong Yue, and Swarat Chaudhuri. Imitation-projected programmatic
reinforcement learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox,
and R. Garnett (eds.), Advances in Neural Information Processing Systems , volume 32. Cur-
ran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/
file/5a44a53b7d26bb1e54c05222f186dcfb-Paper.pdf.
Qisong Yang, Thiago D. Simão, Simon H Tindemans, and Matthijs T. J. Spaan. WCSAC: Worst-
case soft actor critic for safety-constrained reinforcement learning. Proceedings of the AAAI
Conference on Artificial Intelligence, 35(12):10639–10646, May 2021. URL https://ojs.
aaai.org/index.php/AAAI/article/view/17272.
Tsung-Yen Yang, Justinian Rosca, Karthik Narasimhan, and Peter J Ramadge. Projection-based
constrained policy optimization. arXiv preprint arXiv:2010.03152, 2020.
Dongjie Yu, Haitong Ma, Shengbo Li, and Jianyu Chen. Reachability constrained reinforcement
learning. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and
Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning ,
volume 162 of Proceedings of Machine Learning Research, pp. 25636–25655. PMLR, 17–23 Jul
2022. URL https://proceedings.mlr.press/v162/yu22d.html.
Yiming Zhang, Quan Vuong, and Keith Ross. First order constrained optimization in pol-
icy space. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Ad-
vances in Neural Information Processing Systems , volume 33, pp. 15338–15349. Curran As-
sociates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/
af5d5ef24881f3c3049a7b9bfe74d58b-Paper.pdf.
He Zhu, Zikang Xiong, Stephen Magill, and Suresh Jagannathan. An inductive synthesis framework
for verifiable reinforcement learning. In ACM Conference on Programming Language Design and
Implementation (SIGPLAN), 2019.
A PROOFS OF THEOREMS
In this section we present proofs for the theorems in Section 5. First, we will look at the safety
results. We need some assumptions for this theorem:
Assumption 1. The function APPROXIMATE returns a sound, nondeterministic approximation of
M in a region reachable from state x0 over a time horizon H. That is, let SR be the set of all states
x for which there exists a sequence of actions under which the system can transition from x0 to x
within H time steps. Then, if f = APPROXIMATE(M, x0, u∗0), for all x ∈ SR and u ∈ A we have
M(x, u) ∈ f(x, u).
Assumption 2. The model learning procedure returns a model which is close to the actual environ-
ment with high probability. That is, if M is a learned environment model then for all x, u,
$$ \Pr_{x' \sim P(\cdot \mid x, u)}\left[ \| M(x, u) - x' \| > \varepsilon \right] < \delta. $$
Definition 1. A state x0 is said to have realizable safety over a time horizon H if there exists a
sequence of actions u0, . . . ,uH−1 such that, when x0, . . . ,xH is the trajectory unrolled starting
from x0 in the true environment, the formula ϕ inside WPSHIELD(M, xi, π(xi)) is satisfiable for
all i.
Lemma 1. (WPSHIELD is safe under bounded error.) Let H be a time horizon, x0 be a state with
realizable safety over H, M be an environment model, and π be a policy. Choose ε such that for
all states x and actions u, ∥M(x, u) − x′∥ ≤ ε where x′ is sampled from the true environment
transition at x and u. For 0 ≤ i < H, let ui = WPSHIELD(M, xi, π(xi)) and let xi+1 be sampled
from the true environment at xi and ui. Then for 0 ≤ i ≤ H, xi ∉ SU.
Proof. Combining Assumption 1 with condition 1 in the definition of weakest preconditions, we
conclude that for all e ∈ M(xi, ui), WP(ϕi+1, f) ⇒ ϕi+1[xi+1 ↦ e]. Stepping backward
through the loop in WPSHIELD, we find that for all ei ∈ M(xi−1, ui−1) for 1 ≤ i ≤ H, ϕ0 ⇒
ϕH[x1 ↦ e1, . . . , xH ↦ eH]. Because ϕH asserts the safety of the system, we have that ϕ0 also
implies that the system is safe. Then, because the actions returned by WPSHIELD are constrained to
satisfy ϕ0, we also have that xi for 0 ≤ i ≤ H are safe.
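To make the backward weakest-precondition step in this proof concrete, the following sketch shows the substitution for the special case of a deterministic linear model x′ = Ax + Bu + c and a single polyhedral safety constraint Px′ ≤ q. This is an illustration only: the actual WPSHIELD operates on the nondeterministic approximation returned by APPROXIMATE and conjoins such constraints over the full horizon, which is omitted here, and the function names below are ours rather than identifiers from the SPICE code.

```python
import numpy as np

def wp_linear(P, q, A, B, c):
    """One-step weakest precondition of the polyhedral constraint P x' <= q
    under the deterministic linear model x' = A x + B u + c.
    Substituting the model gives (P A) x + (P B) u <= q - P c."""
    return P @ A, P @ B, q - P @ c

def action_is_safe(x, u, P, q, A, B, c, eps=1e-9):
    """Check whether taking action u in state x keeps the next state inside
    the safe polyhedron {x' : P x' <= q}."""
    Px, Pu, q_new = wp_linear(P, q, A, B, c)
    return bool(np.all(Px @ x + Pu @ u <= q_new + eps))

if __name__ == "__main__":
    # Toy 1-D example: dynamics x' = x + u, safety constraint x' <= 1.
    A, B, c = np.array([[1.0]]), np.array([[1.0]]), np.array([0.0])
    P, q = np.array([[1.0]]), np.array([1.0])
    print(action_is_safe(np.array([0.5]), np.array([0.3]), P, q, A, B, c))  # True
    print(action_is_safe(np.array([0.5]), np.array([0.8]), P, q, A, B, c))  # False
```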
Theorem 1. (WPSHIELD is probabilistically safe.) For a given state x0 with realizable safety
over a time horizon H, a neural policy π, and an environment model M, let ε and δ be the
error and probability bound of the model as defined in Assumption 2. For 0 ≤ i < H, let
ui = WPSHIELD(M, xi, π(xi)) and let xi+1 be the result of taking action ui at state xi. Then, with
probability at least (1 − δ)^i, xi is safe.
Proof. By Lemma 1, if ∥M(x, u) − x′∥ ≤ ε then xi is safe for all 0 ≤ i ≤ H. By Assumption 2,
∥M(x, u) − x′∥ ≤ ε with probability at least 1 − δ. Then at each time step, with probability at most
δ the assumption of Lemma 1 is violated. Therefore after i time steps, the probability that Lemma 1
can be applied is at least (1 − δ)^i, so xi is safe with probability at least (1 − δ)^i.
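As a quick numerical illustration of this bound (with hypothetical values of δ and H, not quantities measured in our experiments):

```python
# Probability that all H shielded steps stay safe when the per-step model-error
# event of Assumption 2 occurs with probability at most delta (Theorem 1 bound).
delta, H = 0.01, 5            # hypothetical values chosen purely for illustration
print((1 - delta) ** H)       # ~0.951, so the trajectory is safe w.p. >= 0.951
```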
In order to establish a regret bound, we will analyze Algorithm 1 as a functional mirror descent in
the policy space. In this view, we assume the existence of a class G of safe policies, a class F ⊇ G
of neural policies, and a class H of mixed, neurosymbolic policies.
We define a safety indicator Z which is one whenever WPSHIELD(M, x, π(x)) = π(x) and zero
otherwise. We will need a number of additional assumptions:
1. H is a vector space equipped with an inner product ⟨·, ·⟩ and induced norm ∥π∥ = √⟨π, π⟩;
2. The long-term reward R is LR-Lipschitz;
3. F is a convex function on H, and ∇F is LF-Lipschitz continuous on H;
4. H is bounded (i.e., sup{∥π − π′∥ | π, π′ ∈ H} < ∞);
5. E[1 − Z] ≤ ζ, i.e., the probability that the shield modifies the action is bounded above by ζ;
6. the bias introduced in the sampling process is bounded by β, i.e., ∥E[ˆ∇F | π] − ∇FR(π)∥ ≤ β, where ˆ∇F is the estimated gradient;
7. for x ∈ S, u ∈ A, and policy π ∈ H, if π(u | x) > 0 then π(u | x) > δ for some fixed δ > 0;
8. the KL-divergence between the true environment dynamics and the model dynamics is bounded by ϵm; and
9. the TV-divergence between the policy used to gather data and the policy being trained is bounded by ϵπ.
For the following regret bound, we will need three useful lemmas from prior work. These lemmas
are reproduced below for completeness.
Lemma 2. (Janner et al. (2019), Lemma B.3) Let the expected KL-divergence between two tran-
sition distributions be bounded by $\max_t \mathbb{E}_{x \sim p_1^t(x)} D_{KL}(p_1(x', u \mid x) \,\|\, p_2(x', u \mid x)) \le \epsilon_m$ and
$\max_x D_{TV}(\pi_1(u \mid x) \,\|\, \pi_2(u \mid x)) < \epsilon_\pi$. Then the difference in returns under dynamics p1 with
policy π1 and p2 with policy π2 is bounded by
$$ |R_{p_1}(\pi_1) - R_{p_2}(\pi_2)| \le \frac{2R\gamma(\epsilon_\pi + \epsilon_m)}{(1-\gamma)^2} + \frac{2R\epsilon_\pi}{1-\gamma} = O(\epsilon_\pi + \epsilon_m). $$
Lemma 3. (Anderson et al. (2020), Appendix B) Let D be the diameter of H, i.e., D = sup{∥π −
π′∥ | π, π′ ∈ H}. Then the bias incurred by approximating ∇HR(π) with ∇FR(π) is bounded by
$$ \left\| \mathbb{E}\left[ \hat\nabla_F \mid \pi \right] - \nabla_H R(\pi) \right\| = O(\beta + L_R \zeta). $$
Lemma 4. (Verma et al. (2019), Theorem 4.1) Let π1, . . . , πT be a sequence of safe policies returned
by Algorithm 1 (i.e., πi is the result of calling WPSHIELD on the trained policy) and let π∗ be
the optimal safe policy. Let β and σ2 be bounds on the bias and variance of the gradient
estimation, and let ϵ be a bound on the error incurred due to imprecision in WPSHIELD. Then,
setting $\eta = \sqrt{\tfrac{1}{\sigma^2}\left(\tfrac{1}{T} + \epsilon\right)}$, we have the expected regret over T iterations:
$$ R(\pi^*) - \mathbb{E}\left[ \frac{1}{T} \sum_{i=1}^{T} R(\pi_i) \right] = O\!\left( \sigma \sqrt{\frac{1}{T} + \epsilon} + \beta \right). $$
Now using Lemma 2, we will bound the gradient bias incurred by using model rollouts rather than
true-environment rollouts.
Lemma 5. For a given policy π, the bias in the gradient estimate incurred by using the environment
model rather than the true environment is bounded by
$$ \left\| \hat\nabla_F R(\pi) - \nabla_H R(\pi) \right\| = O(\epsilon_m + \epsilon_\pi). $$
Proof. Recall from the policy gradient theorem (Sutton et al., 1999) that
$$ \nabla_F R(\pi) = \mathbb{E}_{x \sim \rho_\pi,\, u \sim \pi}\left[ \nabla_F \log \pi(u \mid x)\, Q^\pi(x, u) \right] $$
where ρπ is the state distribution induced by π and Qπ is the long-term expected reward starting
from state x under action u. By Lemma 2, we have |Qπ(x, u) − ˆQπ(x, u)| ≤ O(ϵm + ϵπ) where
ˆQπ is the expected return under the learned environment model. Then because log π(u | x) is
the same regardless of whether we use the environment model or the true environment, we have
∇F log π(u | x) = ˆ∇F log π(u | x) and
$$\begin{aligned}
\left\| \hat\nabla_F R(\pi) - \nabla_F R(\pi) \right\|
&= \left\| \mathbb{E}\left[ \hat\nabla_F \log \pi(u \mid x)\, \hat Q^\pi(x, u) \right] - \mathbb{E}\left[ \nabla_F \log \pi(u \mid x)\, Q^\pi(x, u) \right] \right\| \\
&= \left\| \mathbb{E}\left[ \nabla_F \log \pi(u \mid x)\, \hat Q^\pi(x, u) \right] - \mathbb{E}\left[ \nabla_F \log \pi(u \mid x)\, Q^\pi(x, u) \right] \right\| \\
&= \left\| \mathbb{E}\left[ \nabla_F \log \pi(u \mid x)\, \hat Q^\pi(x, u) - \nabla_F \log \pi(u \mid x)\, Q^\pi(x, u) \right] \right\| \\
&= \left\| \mathbb{E}\left[ \nabla_F \log \pi(u \mid x) \left( \hat Q^\pi(x, u) - Q^\pi(x, u) \right) \right] \right\|.
\end{aligned}$$
Now because we assume π(u | x) > δ whenever π(u | x) > 0, the gradient of the log is bounded
above by a constant. Therefore,
$$ \left\| \hat\nabla_F R(\pi) - \nabla_H R(\pi) \right\| = O(\epsilon_m + \epsilon_\pi). $$
Theorem 2. (SPICE converges to an optimal safe policy.) Let $\pi_S^{(i)}$ for 1 ≤ i ≤ T be a sequence of
safe policies learned by SPICE (i.e., $\pi_S^{(i)} = \lambda x.\,\mathrm{WPSHIELD}(M, x, \pi(x))$) and let $\pi_S^*$ be the optimal
safe policy. Let β and σ2 be the bias and variance in the gradient estimate which are incurred due to
sampling. Then, setting the learning rate $\eta = \sqrt{\tfrac{1}{\sigma^2}\left(\tfrac{1}{T} + \epsilon\right)}$, we have the expected regret bound:
$$ R(\pi_S^*) - \mathbb{E}\left[ \frac{1}{T} \sum_{i=1}^{T} R\!\left(\pi_S^{(i)}\right) \right] = O\!\left( \sigma \sqrt{\frac{1}{T} + \epsilon} + \beta + L_R \zeta + \epsilon_m + \epsilon_\pi \right). $$
Proof. The total bias in gradient estimates is bounded by the sum of (i) the bias incurred by sam-
pling, (ii) the bias incurred by shield interference, and (iii) the bias incurred by using an environment
model rather than the true environment. Part (i) is bounded by assumption, part (ii) is bounded by
Lemma 3, and part (iii) is bounded by Lemma 5. Combining these results, we find that the total bias
in the gradient estimate is O(β + LRζ + ϵm + ϵπ). Plugging this bound into Lemma 4, we reach
the desired result.
B OVERLAPPING POLYHEDRA
In Section 4.3, we claimed that in many environments, the safe region can be represented by over-
lapping polyhedra. In this section, we formalize the notion of “overlapping” in our context and
explain why many practical environments satisfy this property.
We say two polyhedra “overlap” if their intersection has positive volume. That is, polyhedra p1 and
p2 overlap if µ(p1 ∩ p2) > 0 where µ is the Lebesgue measure.
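As a concrete illustration (not part of the SPICE implementation), checking this overlap condition for two bounded polyhedra given in H-representation, {x : A1x ≤ b1} and {x : A2x ≤ b2}, reduces to a small linear program: the intersection has positive volume exactly when it contains a ball of strictly positive radius, which the Chebyshev-center LP below detects. The sketch assumes scipy is available and that both polyhedra are bounded.

```python
import numpy as np
from scipy.optimize import linprog

def polyhedra_overlap(A1, b1, A2, b2, tol=1e-9):
    """Return True if {x : A1 x <= b1} and {x : A2 x <= b2} intersect in a set
    of positive Lebesgue measure. We maximize the radius r of a ball centered
    at x inside the intersection (the Chebyshev center); the overlap has
    positive volume exactly when the optimal r is strictly positive.
    Assumes both polyhedra are bounded."""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    n = A.shape[1]
    # Decision variables (x, r); constraints a_i.x + ||a_i|| r <= b_i; maximize r.
    A_ub = np.hstack([A, norms])
    cost = np.zeros(n + 1)
    cost[-1] = -1.0                           # linprog minimizes, so minimize -r
    bounds = [(None, None)] * n + [(0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b, bounds=bounds, method="highs")
    return bool(res.success and -res.fun > tol)

# Example: two axis-aligned unit squares shifted by 0.5 along the x-axis.
def box(lo, hi):
    A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    b = np.array([hi[0], -lo[0], hi[1], -lo[1]], dtype=float)
    return A, b

A1, b1 = box((0.0, 0.0), (1.0, 1.0))
A2, b2 = box((0.5, 0.0), (1.5, 1.0))
print(polyhedra_overlap(A1, b1, A2, b2))   # True: the overlap is [0.5,1] x [0,1]
```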
Often in practical continuous control environments, either this property is satisfied, or it is impossi-
ble to verify any safe trajectories at all. This is because, in continuous control, the system trajectory is
a path in the state space, and this path has to move between the different polyhedra defining the safe
space. To see how this necessitates our overlapping property, let’s take a look at a few possibilities
for how the path can pass from one polyhedron p1 to a second polyhedron p2. For simplicity, we’ll
assume the polyhedra are closed, but this argument can be extended straightforwardly to open or
partially open polyhedra.
• If the two polyhedra are disconnected, then the system is unable to transition between them
because the system trajectory must define a path in the safe region of the state space. Since
the two sets are disconnected, the path must pass through the unsafe states, and therefore
cannot be safe.
• Suppose the dimension of the state space is n and the intersection of the two polyhedra
is an n − 1 dimensional surface (for example, if the state space is 2D then the polyhedra
intersect in a line segment). In this case, we can add a new polyhedron to the set of safe
polyhedra in order to provide an overlap to both p1 and p2. Specifically, let X be the set
of vertices of p1 ∩ p2. Choose a point x1 in the interior of p1 and a point x2 in the interior
of p2. Now define p′ as the convex hull of X ∪ {x1, x2}. Note that p′ ⊆ p1 ∪ p2, so we
can add p′ to the set of safe polyhedra without changing the safe state space as a whole.
However, p′ overlaps with both p1 and p2, and therefore the modified environment has the
overlapping property. (A short sketch of this construction is given after this list.)
• Otherwise, p1 ∩ p2 is a lower-dimensional surface. Then for every point x ∈ p1 ∩ p2 and
for every ϵ > 0 there exists an unsafe point x′ such that ∥x − x′∥ < ϵ. In order for the
system to transition from p1 to p2, it must pass through a point which is arbitrarily close to
unsafe. As a result, the system must be arbitrarily fragile — any perturbation can result in
unsafe behavior. Because real-world systems are subject to noise and/or modeling error, it
would be impossible to be sure the system would be safe in this case.
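The bridging-polyhedron construction from the second case above can be sketched as follows, assuming we are given the vertex set X of p1 ∩ p2 together with one interior point of each polyhedron. This is purely illustrative and not part of our implementation; scipy's ConvexHull is used for the hull computation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def bridging_polyhedron(X, x1, x2):
    """Build p' = conv(X U {x1, x2}), where X holds the vertices of the shared
    (n-1)-dimensional face p1 ∩ p2 and x1, x2 are interior points of p1 and p2.
    Since p' is contained in p1 ∪ p2, adding p' leaves the safe state space
    unchanged while giving a polyhedron that overlaps both p1 and p2."""
    pts = np.vstack([np.asarray(X, dtype=float), x1[None, :], x2[None, :]])
    hull = ConvexHull(pts)
    # hull.equations rows are [a, d] with a.x + d <= 0 inside the hull,
    # so the H-representation is A x <= b with A = a-rows and b = -d.
    return hull.equations[:, :-1], -hull.equations[:, -1]

# Toy example: p1 = [0,1] x [0,1] and p2 = [1,2] x [0,1] share the face x = 1.
X = np.array([[1.0, 0.0], [1.0, 1.0]])                 # vertices of p1 ∩ p2
x1, x2 = np.array([0.5, 0.5]), np.array([1.5, 0.5])    # interior points
A, b = bridging_polyhedron(X, x1, x2)
print(A.shape, b.shape)    # H-representation of the bridging polyhedron p'
```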
C FURTHER EXPERIMENTAL DATA
In this section we provide further details about our experimental setup and results.
Our experiments are taken from Anderson et al. (2020), and consist of 10 environments with continu-
ous state and action spaces. The mountain-car and pendulum benchmarks are continuous versions of
the corresponding classical control environments. The acc benchmark represents an adaptive cruise
control environment. The remaining benchmarks represent various situations arising from robotics.
See Anderson et al. (2020), Appendix C for a more complete description of each benchmark.
As mentioned in Section 6, our tool is built on top of MBPO (Janner et al., 2019) using
SAC (Haarnoja et al., 2018a) as the underlying learning algorithm. We gather real data for 10
episodes for each model update, then collect data from 70 simulated episodes before updating the en-
vironment model again. We look five time steps into the future during safety analysis. Our SAC im-
plementation (adapted from Tandon (2018)) uses automatic entropy tuning as proposed in Haarnoja
et al. (2018b). To compare with CPO, we use the original implementation from Achiam et al. (2017).
Each training process is cut off after 48 hours. We train each benchmark starting from nine distinct
seeds.
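For reference, the training-schedule settings stated above are summarized below. The dictionary keys are our own naming for this summary and do not correspond to the configuration format of any particular MBPO or SAC implementation.

```python
# Experiment settings as described in the text (values only; the naming is ours).
SPICE_EXPERIMENT_SETTINGS = {
    "real_episodes_per_model_update": 10,       # real-environment episodes per model fit
    "simulated_episodes_per_model_update": 70,  # model-generated episodes between fits
    "safety_horizon": 5,                        # shield lookahead in time steps
    "sac_entropy_tuning": "automatic",          # per Haarnoja et al. (2018b)
    "wall_clock_limit_hours": 48,               # training cut-off per run
    "seeds_per_benchmark": 9,                   # independent runs per benchmark
}
```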
Because the code for Bharadhwaj et al. (2021) is not available, we use a modified version of our
code for comparison, which we label CSC-MBPO. Our implementation follows Algorithm 1 except
that WPSHIELD is replaced by an alternative shielding framework. This framework learns a neural
safety signal using conservative Q-learning and then resamples actions from the policy until a safe
action is chosen, as described in Bharadhwaj et al. (2021). We chose this implementation in order to
give the fairest possible comparison between SPICE and the conservative safety critic approach, as
the only difference between the two tools in our experiments is the shielding approach. The code
for our tool includes our implementation of CSC-MBPO.
The safety curves for the remaining benchmarks are presented in Figure 5. Training curves for the
remaining benchmarks are presented in Figure 6.
[Figure 5: Cumulative safety violations over time. Panels: (a) acc, (b) mountain-car, (c) noisy-road, (d) road.]
[Figure 6: Training curves for SPICE and CPO. Panels: (a) acc, (b) mountain-car, (c) noisy-road, (d) road.]
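For concreteness, the resampling shield used in our CSC-MBPO baseline has roughly the following shape. This is a hedged sketch rather than a verbatim excerpt: the safety-critic threshold, the resampling budget, and the behavior when no sampled action is accepted are implementation choices, and `policy` and `q_safe` are placeholders for the SAC policy and the conservatively trained safety critic.

```python
def csc_shield(policy, q_safe, state, threshold, max_resamples=100):
    """Resampling shield in the style of Bharadhwaj et al. (2021): draw actions
    from the policy until the conservative safety critic scores one below the
    threshold. `policy(state)` is assumed to return a distribution object with
    a .sample() method, and `q_safe(state, action)` an estimated risk score."""
    dist = policy(state)
    action = dist.sample()
    for _ in range(max_resamples):
        if q_safe(state, action) <= threshold:   # predicted safe enough: accept
            return action
        action = dist.sample()                   # otherwise try another action
    return action  # fall back to the last sample if nothing is accepted
```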
C.1 EXPLORING THE SAFETY HORIZON
As mentioned in Section 6, SPICE relies on choosing a good horizon over which to compute the
weakest precondition. We will now explore this tradeoff in more detail. Safety curves for each
benchmark under several different choices of horizon are presented in Figure 7. The performance
curves for each benchmark are shown in Figure 8.
There are a few notable phenomena shown in these curves. As expected, in most cases using a safety
horizon of one does not give particularly good safety. This is expected because as the safety horizon
becomes very small, it is easy for the system to end up in a state where there are no safe actions.
The obstacle benchmark shows this trend very clearly: as the safety horizon increases, the number
of safety violations decreases.
[Figure 7: Safety curves for SPICE using different safety horizons. Panels: (a) acc, (b) car-racing, (c) mountain-car, (d) noisy-road, (e) noisy-road-2d, (f) obstacle, (g) obstacle2, (h) pendulum, (i) road, (j) road-2d.]
On the other hand, several benchmarks (e.g., acc, mountain-car, and noisy-road-2d) show a more
interesting dynamic: very large safety horizons also lead to an increase in safety violations. This is a
little less intuitive because as we look farther into the future, we should be able to avoid more unsafe
behaviors. However, in reality there is an explanation for this phenomenon. The imprecision in the
environment model (both due to model learning and due to the call to APPROXIMATE) accumulates
for each time step we need to look ahead. As a result, large safety horizons lead to a fairly imprecise
analysis. Not only does this interfere with exploration, but it can also lead to an infeasible constraint
set in the shield. (That is, ϕ in Algorithm 2 becomes unsatisfiable.) In this case, the projection in
Algorithm 2 is ill-defined, so SPICE relies on a simplistic backup controller. This controller is not
always able to guarantee safety, leading to an increase in safety violations as the horizon increases.
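A toy calculation (not taken from our implementation) illustrates why imprecision accumulates with the horizon: propagating a per-step model error of ±ε through H steps of a one-dimensional linear model x′ = ax + u inflates the width of the reachable set geometrically in |a|, so the constraints generated for large H become correspondingly loose.

```python
def reachable_width(a, eps, H):
    """Width of the interval of possible states after H steps when each step of
    the model x' = a*x + u is only accurate up to an additive error in
    [-eps, +eps]. Closed form: 2*eps*(|a|**H - 1)/(|a| - 1) for |a| != 1."""
    width = 0.0
    for _ in range(H):
        width = abs(a) * width + 2 * eps   # one step of interval propagation
    return width

for H in (1, 5, 10, 20):
    print(H, round(reachable_width(a=1.1, eps=0.05, H=H), 3))
```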
In practice, we find that a safety horizon of five provides a good amount of safety in most benchmarks
without interfering with training. Smaller or larger values can lead to more safety violations while
also reducing performance a little in some benchmarks. In general, tuning the safety horizon for
each benchmark can yield better results, but for the purposes of this evaluation we have chosen to
use the same horizon throughout.
C.2 QUALITATIVE EVALUATION
Figure 9 shows trajectories sampled from each tool at various stages of training. Specifically, every
100 episodes during training, 100 trajectories were sampled. The plotted trajectories represent the
worst samples from this set of 100. The environment represents a robot moving in a 2D plane which
must reach the green shaded region while avoiding the red crosshatched region. (Notice that while
the two regions appear to move in the figure, they are actually static; the axes in each part of the
figure change in order to show the trajectories in their entirety.) From this visualization, we can
see that SPICE is able to quickly find a policy which safely reaches the goal every time. By contrast,
CSC-MBPO requires much more training data to find a good policy and encounters more safety
violations along the way. CPO is also slower to converge and less safe than SPICE.
[Figure 8: Training curves for SPICE using different safety horizons. Panels: (a) acc, (b) car-racing, (c) mountain-car, (d) noisy-road, (e) noisy-road-2d, (f) obstacle, (g) obstacle2, (h) pendulum, (i) road, (j) road-2d.]
[Figure 9: Trajectories at various stages of training.]