Do column data agree with a predicate expression?
The col_vals_expr() validation function, the expect_col_vals_expr()
expectation function, and the test_col_vals_expr() test function all check
whether column values in a table agree with a user-defined predicate
expression. The validation function can be used directly on a data table or
with an agent object (technically, a ptblank_agent object) whereas the
expectation and test functions can only be used with a data table. Each
validation step or expectation will operate over the number of test units
that is equal to the number of rows in the table (after any preconditions
have been applied).
col_vals_expr(
  x,
  expr,
  preconditions = NULL,
  segments = NULL,
  actions = NULL,
  step_id = NULL,
  label = NULL,
  brief = NULL,
  active = TRUE
)

expect_col_vals_expr(object, expr, preconditions = NULL, threshold = 1)

test_col_vals_expr(object, expr, preconditions = NULL, threshold = 1)
x: A data frame, tibble (tbl_df or tbl_dbi), Spark DataFrame (tbl_spark),
or, an agent object of class ptblank_agent that is created with
create_agent().
expr: An expression to use for this validation. This can either be in the
form of a call made with the expr() function or as a one-sided R formula
(using a leading ~).
preconditions: An optional expression for mutating the input table before
proceeding with the validation. This can either be provided as a one-sided
R formula using a leading ~ (e.g., ~ . %>% dplyr::mutate(col = col + 10))
or as a function (e.g., function(x) dplyr::mutate(x, col = col + 10)). See
the Preconditions section for more information.
segments: An optional expression or set of expressions (held in a list) that serve to segment the target table by column values. Each expression can be given in one of two ways: (1) as column names, or (2) as a two-sided formula where the LHS holds a column name and the RHS contains the column values to segment on. See the Segments section for more details on this.
actions: A list containing threshold levels so that the validation step can react accordingly when exceeding the set levels. This is to be created with the action_levels() helper function. See the Actions section for more information.
step_id: One or more optional identifiers for the single or multiple
validation steps generated from calling a validation function. The use of
step IDs serves to distinguish validation steps from each other and provide
an opportunity for supplying a more meaningful label compared to the step
index. By default this is NULL, and pointblank will automatically generate
the step ID value (based on the step index) in this case. One or more values
can be provided, and the exact number of ID values should (1) match the
number of validation steps that the validation function call will produce
(influenced by the number of columns provided), (2) be an ID string not used
in any previous validation step, and (3) be a vector with unique values.
label: An optional label for the validation step. This label appears in the agent report and, for the best appearance, it should be kept short.
brief: An optional, text-based description for the validation step. If
nothing is provided here then an autobrief is generated by the agent, using
the language provided in create_agent()'s lang argument (which defaults to
"en" or English). The autobrief incorporates details of the validation step
so it's often the preferred option in most cases (where a label might be
better suited to succinctly describe the validation).
active: A logical value indicating whether the validation step should be
active. If the validation function is working with an agent, FALSE will make
the validation step inactive (still reporting its presence and keeping
indexes for the steps unchanged). If the validation function will be
operating directly on data (no agent involvement), then any step with
active = FALSE will simply pass the data through with no validation
whatsoever. Aside from a logical vector, a one-sided R formula using a
leading ~ can be used with . (serving as the input data table) to evaluate
to a single logical value. With this approach, the pointblank function
has_columns() can be used to determine whether to make a validation step
active on the basis of one or more columns existing in the table (e.g.,
~ . %>% has_columns(vars(d, e))). The default for active is TRUE.
object: A data frame, tibble (tbl_df or tbl_dbi), or Spark DataFrame
(tbl_spark) that serves as the target table for the expectation function or
the test function.
threshold: A simple failure threshold value for use with the expectation
(expect_) and the test (test_) function variants. By default, this is set to
1, meaning that any single unit of failure in data validation results in an
overall test failure. Whole numbers beyond 1 indicate that any failing units
up to that absolute threshold value will result in a succeeding testthat
test or evaluate to TRUE. Likewise, fractional values (between 0 and 1) act
as a proportional failure threshold, where 0.15 means that 15 percent of
failing test units results in an overall test failure.
For the validation function, the return value is either a
ptblank_agent object or a table object (depending on whether an agent
object or a table was passed to
x). The expectation function invisibly
returns its input but, in the context of testing data, the function is
called primarily for its potential side-effects (e.g., signaling failure).
The test function returns a logical value.
The types of data tables that are officially supported are:
- data frames (data.frame) and tibbles (tbl_df)
- Spark DataFrames (tbl_spark)
- the following database tables (tbl_dbi): PostgreSQL, MySQL, Microsoft SQL Server, BigQuery, DuckDB, and SQLite
Other database tables may work to varying degrees but they haven't been formally tested (so be mindful of this when using unsupported backends with pointblank).
Providing expressions as
preconditions means pointblank will preprocess
the target table during interrogation as a preparatory step. It might happen
that a particular validation requires a calculated column, some filtering of
rows, or the addition of columns via a join, etc. Especially for an
agent-based report this can be advantageous since we can develop a large
validation plan with a single target table and make minor adjustments to it,
as needed, along the way.
The table mutation is totally isolated in scope to the validation step(s)
where preconditions is used. Using dplyr code is suggested here since the
statements can be translated to SQL if necessary (i.e., if the target table
resides in a database). The code is most easily supplied as a one-sided R
formula (using a leading ~). In the formula representation, the . serves as
the input data table to be transformed (e.g.,
~ . %>% dplyr::mutate(col_b = col_a + 10)). Alternatively, a function could
instead be supplied (e.g., function(x) dplyr::mutate(x, col_b = col_a + 10)).
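As a minimal sketch of preconditions in use (the table and column names here
are hypothetical, not from the original examples), a derived column can be
computed just for the scope of a single validation step:

tbl_orders <- dplyr::tibble(price = c(10, 20, 30), qty = c(1, 2, 3))

# The `total` column only exists within this validation step; the
# mutation from `preconditions` never touches the original table
tbl_orders %>%
  col_vals_expr(
    expr = ~ total == price * qty,
    preconditions = ~ . %>% dplyr::mutate(total = price * qty)
  )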
By using the segments argument, it's possible to define a particular
validation with segments (or row slices) of the target table. The segmenting
expression or set of expressions serves to segment the target table by
column values, and each expression can be given in one of two ways: (1) as
column names, or (2) as a two-sided formula where the LHS holds a column
name and the RHS contains the column values to segment on.
As an example of the first type of expression that can be used,
vars(a_column) will segment the target table into however many unique values
are present in the column called a_column. This is great if every unique
value in a particular column (like different locations, or different dates)
requires its own repeating validation.
With a formula, we can be more selective with which column values should be
used for segmentation. Using a_column ~ c("group_1", "group_2") will
attempt to obtain two segments where one is a slice of data where the value
"group_1" exists in the column named "a_column", and, the other is a slice
where "group_2" exists in the same column. Each group of rows resolved from
the formula will result in a separate validation step.
If there are multiple
columns specified then the potential number of
validation steps will be
m columns multiplied by
n segments resolved.
Segmentation will always occur after preconditions (i.e., statements that
mutate the target table), if any, are applied. With this type of one-two
combo, it's possible to generate labels for segmentation using an expression
for preconditions and refer to those labels in segments without having to
generate a separate version of the target table.
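As a short sketch of that combination (the grouping logic here is
hypothetical, and an agent being built up is assumed), a label column
created in preconditions can drive the segmentation:

agent %>%
  col_vals_expr(
    expr = ~ a %% 1 == 0,
    # Create a `group` label during preprocessing...
    preconditions = ~ . %>% dplyr::mutate(group = ifelse(b == 0, "low", "high")),
    # ...then segment on the values of that generated column
    segments = group ~ c("low", "high")
  )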
Often, we will want to specify actions for the validation. This argument,
present in every validation function, takes a specially-crafted list
object that is best produced by the action_levels() function. Read that
function's documentation for the lowdown on how to create reactions to
above-threshold failure levels in validation. The basic gist is that you'll
want at least a single threshold level (specified as either the fraction of
test units failed, or, an absolute value), often using the warn_at
argument. This is especially true when x is a table object because,
otherwise, nothing happens. For the col_vals_*()-type functions, using
action_levels(warn_at = 0.25) or action_levels(stop_at = 0.25) are good
choices depending on the situation (the first produces a warning when a
quarter of the total test units fails, the other stop()s at the same
threshold level).
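For instance, a sketch of a direct-on-data call with a warning threshold
(assuming a table tbl with a numeric column a):

tbl %>%
  col_vals_expr(
    expr = ~ a %% 1 == 0,
    # Warn (rather than stop) once a quarter of the test units fail
    actions = action_levels(warn_at = 0.25)
  )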
Want to describe this validation step in some detail? Keep in mind that this
is only useful if
x is an agent. If that's the case,
brief the agent
with some text that fits. Don't worry if you don't want to do it. The
autobrief protocol is kicked in when
brief = NULL and a simple brief will
then be automatically generated.
A pointblank agent can be written to YAML with yaml_write() and the
resulting YAML can be used to regenerate an agent (with yaml_read_agent())
or interrogate the target table (via yaml_agent_interrogate()). When
col_vals_expr() is represented in YAML (under the top-level steps key as
a list member), the syntax closely follows the signature of the validation
function. Here is an example of how a complex call of col_vals_expr() as a
validation step is expressed in R code and in the corresponding YAML
representation.
agent %>%
  col_vals_expr(
    expr = ~ a %% 1 == 0,
    preconditions = ~ . %>% dplyr::filter(a < 10),
    segments = b ~ c("group_1", "group_2"),
    actions = action_levels(warn_at = 0.1, stop_at = 0.2),
    label = "The `col_vals_expr()` step.",
    active = FALSE
  )
steps:
- col_vals_expr:
    expr: ~a%%1 == 0
    preconditions: ~. %>% dplyr::filter(a < 10)
    segments: b ~ c("group_1", "group_2")
    actions:
      warn_fraction: 0.1
      stop_fraction: 0.2
    label: The `col_vals_expr()` step.
    active: false
In practice, both of these will often be shorter as only the expr argument
requires a value. Arguments with default values won't be written to YAML
when using yaml_write() (though it is acceptable to include them with their
default when generating the YAML by other means). It is also possible to
preview the transformation of an agent to YAML without any writing to disk
by using the yaml_agent_string() function.
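For example, a quick preview in the console (a sketch, assuming an agent
object already exists):

# Print the agent's YAML representation without writing a file
yaml_agent_string(agent = agent)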
For all of the examples here, we'll use a simple table with three numeric
columns (a, b, and c).
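The table can be reconstructed from the printed output below (a sketch; the
original creation code isn't shown in this excerpt):

tbl <-
  dplyr::tibble(
    a = c(1, 2, 1, 7, 8, 6),
    b = c(0, 0, 0, 1, 1, 1),
    c = c(0.5, 0.3, 0.8, 1.4, 1.9, 1.2)
  )

tbl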
## # A tibble: 6 × 3
##       a     b     c
##   <dbl> <dbl> <dbl>
## 1     1     0   0.5
## 2     2     0   0.3
## 3     1     0   0.8
## 4     7     1   1.4
## 5     8     1   1.9
## 6     6     1   1.2
A: Using an agent with validation functions and then interrogate()

Validate that values in column a are integer-like by using the R modulo
operator and expecting 0. We'll determine if this validation has any
failing test units (there are 6 test units, one for each row).
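A call along these lines sets up and runs that validation (a sketch; the
exact code block from the original page isn't shown in this excerpt):

agent <-
  create_agent(tbl = tbl) %>%
  col_vals_expr(expr = expr(a %% 1 == 0)) %>%
  interrogate()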
Printing the agent in the console shows the validation report in the
Viewer. Here is an excerpt of the validation report, showing the single
entry that corresponds to the validation step demonstrated here.
B: Using the validation function directly on the data (no agent)

This way of using validation functions acts as a data filter. Data is passed
through but should stop() if there is a single test unit failing. The
behavior of side effects can be customized with the actions argument.
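A sketch of such a call (passing the data through and pulling a column to
show that nothing stopped the pipeline):

tbl %>%
  col_vals_expr(expr = expr(a %% 1 == 0)) %>%
  dplyr::pull(a)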
## [1] 1 2 1 7 8 6
C: Using the expectation function

With the expect_*() form, we would typically perform one validation at a
time. This is primarily used in testthat tests.
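A sketch of the expectation call (it signals a testthat failure only if the
threshold is exceeded):

expect_col_vals_expr(tbl, expr = ~ a %% 1 == 0)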
D: Using the test function

With the test_*() form, we should get a single logical value returned to us.
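A sketch of the test call:

tbl %>% test_col_vals_expr(expr = ~ a %% 1 == 0)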
## [1] TRUE
We can do more complex things by taking advantage of the case_when() and
between() functions (available for use in the pointblank package).

tbl %>%
  test_col_vals_expr(expr = ~ case_when(
    b == 0 ~ a %>% between(0, 5) & c < 1,
    b == 1 ~ a > 5 & c >= 1
  ))
## [1] TRUE
If you only want to test a subset of rows, then the case_when() statement
doesn't need to be exhaustive. Any rows that don't fall into the cases will
be pruned (giving us fewer test units overall).
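A sketch with a single, non-exhaustive case (only rows where b == 1 become
test units):

tbl %>%
  test_col_vals_expr(expr = ~ case_when(
    b == 1 ~ a > 5 & c >= 1
  ))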
## [1] TRUE
Other validation functions: