# Standard Tests Development Guide
Standard test cases are found in `betterproto/tests/inputs`, where each subdirectory represents a test case that is verified in isolation.
```
inputs/
   bool/
   double/
   int32/
   ...
```
## Test case directory structure
Each test case has a `<name>.proto` file with a message called `Test`, and optionally a matching `.json` file and a custom test called `test_*.py`.
```
bool/
  bool.proto
  bool.json     # optional
  test_bool.py  # optional
```
### proto
`<name>.proto` — The protobuf message to test
```proto
syntax = "proto3";

message Test {
    bool value = 1;
}
```
You can add multiple `.proto` files to a test case, as long as one file matches the directory name.
### json
`<name>.json` — Test data to validate the message with
```json
{
  "value": true
}
```
### pytest
`test_<name>.py` — Custom test to validate specific aspects of the generated class
```python
from tests.output_betterproto.bool.bool import Test


def test_value():
    message = Test()
    assert not message.value, "Boolean is False by default"
```
## Standard tests
The following tests are automatically executed for all cases:
- Can the generated python code be imported?
- Can the generated message class be instantiated?
- Is the generated code compatible with Google's `grpc_tools.protoc` implementation?
  - when a `.json` file is present
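When a `.json` file is present, the suite round-trips the test data through the generated message class. A stdlib-only sketch of that round trip, using a hypothetical dataclass stand-in for the generated `Test` class (the real class is imported from `tests.output_betterproto`):

```python
import json
from dataclasses import asdict, dataclass


# Hypothetical stand-in for the generated Test class; the real one is
# produced by the plugin under tests/output_betterproto/bool/
@dataclass
class Test:
    value: bool = False


fixture = json.loads('{"value": true}')             # contents of bool.json
message = Test(**fixture)                           # load the test data
round_tripped = json.loads(json.dumps(asdict(message)))
assert round_tripped == fixture                     # data survives the round trip
```

The real check also compares against the reference implementation's output, so both serializers must agree on the same JSON.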
## Running the tests
- `pipenv run generate`
  This generates:
  - `betterproto/tests/output_betterproto` — the plugin-generated Python classes
  - `betterproto/tests/output_reference` — reference implementation classes
- `pipenv run test`
## Intentionally failing tests
The standard test suite includes tests that are expected to fail. These tests document known bugs and missing features that are intended to be corrected in the future.
When running `pytest`, they show up as `x` or `X` in the test results.
```
betterproto/tests/test_inputs.py ..x...x..x...x.X........xx........x.....x.......x.xx....x...................... [ 84%]
```
- `.` — PASSED
- `x` — XFAIL: expected failure
- `X` — XPASS: expected failure, but still passed
Test cases marked for expected failure are declared in `inputs/config.py`.
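A plausible sketch of that declaration, assuming the expected-failure case names are kept in a set named `xfail` (the entry and helper below are hypothetical examples, not the real file's contents):

```python
# Hypothetical sketch of inputs/config.py: test case names that are
# expected to fail (the real entries live in the actual file)
xfail = {
    "example_broken_case",  # hypothetical test case name
}


def is_expected_failure(case_name: str) -> bool:
    # The test runner can use a check like this to apply
    # pytest.mark.xfail to matching parametrized cases
    return case_name in xfail
```

Cases matched this way are reported as `x` (XFAIL) rather than `F`, so known bugs stay visible without breaking the build.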