Find out if your data is what you think it is

Pointblank is a table validation and testing library for Python. It helps you ensure that your tabular data meets certain expectations and constraints, and it can present the results in a beautiful and useful tabular reporting framework.

Getting Started

Let’s take a Polars DataFrame and validate it against a set of constraints. We do that using the Validate class and its collection of validation methods:

import pointblank as pb

validation = (
    pb.Validate(data=pb.load_dataset(dataset="small_table")) # Use Validate() to start
    .col_vals_gt(columns="d", value=100)       # STEP 1 |
    .col_vals_le(columns="c", value=5)         # STEP 2 | <-- Building a validation plan
    .col_exists(columns=["date", "date_time"]) # STEP 3 |
    .interrogate() # This will execute all validation steps and collect intel
)

validation

The rows in the validation table correspond to each of the validation steps. One of the key concepts is that validation steps can be broken down into atomic test cases (test units), and each of these test units is given a pass/fail status based on the validation constraints. You’ll see these tallied up in the reporting table (in the UNITS, PASS, and FAIL columns).

Tabular reporting is just one way to see the results. You can also obtain fine-grained results of the interrogation as JSON output or through methods that report key metrics. Beyond that, the validation results can be used to filter the input table based on row-level pass/fail status (via the get_sundered_data() method).
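
For example, sticking with the validation object from above, row-level filtering and summary information are a method call away. This is only a sketch: get_sundered_data() is described above, while all_passed() and get_json_report() are assumed method names here, so check the API reference for the exact methods and arguments.

# Row-level filtering: get only the rows of the input table that passed
# every row-based validation step (default behavior assumed here)
passing_rows = validation.get_sundered_data()

# Summary information (method names assumed; consult the API reference)
print(validation.all_passed())       # True only if every test unit passed
print(validation.get_json_report())  # step-by-step results as JSON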

On the input side, we can use the following types of tables:

  • Polars DataFrame
  • Pandas DataFrame
  • DuckDB table
  • MySQL table
  • PostgreSQL table
  • SQLite table
  • Parquet

To make this all work seamlessly, we use Narwhals to work with Polars and Pandas DataFrames. We also integrate with Ibis to enable the use of DuckDB, MySQL, PostgreSQL, SQLite, and Parquet. In doing all of this, we can provide an ergonomic and consistent API for validating tabular data from various sources.
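
As a quick sketch of the Ibis route, a Parquet file could be read into an Ibis table and handed to Validate() in the same way as a DataFrame. The file path below is hypothetical, and we assume the file has the same columns as small_table:

import ibis
import pointblank as pb

# Hypothetical Parquet file with the same columns as small_table
tbl = ibis.read_parquet("data/small_table.parquet")

validation = (
    pb.Validate(data=tbl)                    # Ibis tables are accepted like DataFrames
    .col_vals_gt(columns="d", value=100)
    .interrogate()
)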

Features

Here’s a short list of what we think makes pointblank a great tool for data validation:

  • Declarative Syntax: Define your data validation rules using a declarative syntax
  • Flexible: We support tables from Polars, Pandas, DuckDB, MySQL, PostgreSQL, SQLite, and Parquet
  • Beautiful Reports: Generate beautiful HTML reports of your data validation results
  • Functional Output: Get JSON output of your data validation results for further processing
  • Data Testing: Write tests for your data and use them in your notebooks or testing framework (see the sketch after this list)
  • Easy to Use: Get started quickly with a simple API and clear documentation
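
To illustrate the Data Testing point, a validation plan can live inside a test function and drive an assertion. This is a minimal sketch assuming pytest and an all_passed() summary method; the exact assertion helper in your installed version may differ:

import pointblank as pb

def test_small_table_is_valid():
    # Build and interrogate the same plan as in Getting Started
    validation = (
        pb.Validate(data=pb.load_dataset(dataset="small_table"))
        .col_vals_gt(columns="d", value=100)
        .col_vals_le(columns="c", value=5)
        .interrogate()
    )
    # all_passed() is assumed here as the overall pass/fail summary
    assert validation.all_passed()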

Installation

You can install pointblank using pip:

pip install pointblank

If you encounter a bug, have usage questions, or want to share ideas to make this package better, please feel free to file an issue.

Code of Conduct

Please note that the pointblank project is released with a contributor code of conduct.
By participating in this project you agree to abide by its terms.

Contributing to pointblank

There are many ways to contribute to the ongoing development of pointblank. Some contributions can be simple (like fixing typos, improving documentation, filing issues for feature requests or problems, etc.) and others might take more time and care (like answering questions and submitting PRs with code changes). Just know that anything you can do to help would be very much appreciated!

Please read over the contributing guidelines for information on how to get started.

License

Pointblank is licensed under the MIT license.

Governance

This project is primarily maintained by Rich Iannone. Other authors may occasionally assist with some of these duties.