* Update protobuf pregenerated files
* Update grpcio-tools to latest version
* Implement proto3 field presence
* Fix to_dict with None optional fields.
* Add test with optional enum
* Properly support optional enums
* Add tests for 64-bit ints and floats
* Support field presence for int64 types
* Fix oneof serialization with proto3 field presence (#292)

= Description

The serialization of a oneof message that contains a message with explicit-presence fields was buggy. For example:

```proto
message A {
    oneof kind {
        B b = 1;
        C c = 2;
    }
}
message B {}
message C {
    optional bool z = 1;
}
```

Serializing `A(b=B())` would lead to this payload:

```
0A  # tag1, length delimited
00  # length: 0
12  # tag2, length delimited
00  # length: 0
```

which, when deserialized, yields the message `A(c=C())` (see the reproduction sketch after this change log).

= Explanation

The issue lies in the post_init method. All fields are introspected, and if a value differs from PLACEHOLDER, the message is marked as having been "serialized_on_wire". Then, when serializing `A(b=B())`, we go through each field of the oneof:

- field 'b': this is the selected field of the group, so it is serialized;
- field 'c': marked as 'serialized_on_wire', so it is added as well.

= Fix

Support for explicit presence changed the default value from PLACEHOLDER to None, which broke the post_init method in that case. The fix is relatively easy: if a field is optional and set to None, it is treated as holding the default value (which it does).

This fix has a side effect, however: the group_current for this field (the oneof trick used for explicit presence) is no longer set. This changes the behavior when serializing the message to JSON: since the value is the default one (None) and the group is not set (which would force serialization of the field), None fields are no longer serialized to JSON. This breaks one test, which is fixed in the next commit.

* fix: do not serialize None fields in JSON format

This is linked to the fix from the previous commit: after it, scalar None fields were no longer included in the JSON format, but some others still were. This is now cleaned up: None fields are not added to JSON by default, as they indicate the default value of fields with explicit presence. However, if `include_default_values` is set, they are included.

* Fix: use builtin annotation prefix
* Remove comment

Co-authored-by: roblabla <unfiltered@roblab.la>
Co-authored-by: Vincent Thiberville <vthib@pm.me>
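The following is a minimal reproduction sketch of the oneof bug described in the change log above. It assumes the example proto was compiled with betterproto into a hypothetical module `my_proto_pkg`; the module name is an assumption, not part of the original change.

```python
# Minimal reproduction sketch of the oneof bug described above.
# Assumes the example proto was compiled with betterproto into the
# hypothetical module `my_proto_pkg` (the module name is made up).
import betterproto

from my_proto_pkg import A, B

# Serialize with the `b` branch of the oneof selected.
payload = bytes(A(b=B()))

# Deserialize it again; before the fix, this came back as A(c=C())
# because the empty `c` field was also written to the wire.
decoded = A().parse(payload)

# which_one_of returns (field_name, value) for the selected member of a group.
assert betterproto.which_one_of(decoded, "kind")[0] == "b"
```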
# Standard Tests Development Guide
Standard test cases are found in `betterproto/tests/inputs`, where each subdirectory represents a test case that is verified in isolation.
```
inputs/
   bool/
   double/
   int32/
   ...
```
## Test case directory structure
Each test case has a `<name>.proto` file with a message called `Test`, and optionally a matching `.json` file and a custom test called `test_*.py`.
```
bool/
  bool.proto
  bool.json     # optional
  test_bool.py  # optional
```
### proto

`<name>.proto` — The protobuf message to test
```proto
syntax = "proto3";

message Test {
    bool value = 1;
}
```
You can add multiple `.proto` files to the test case, as long as one file matches the directory name.
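For example, a test case with an extra imported definition might be laid out as follows (the file names here are made up for illustration):

```
example_case/
  example_case.proto   # matches the directory name and contains `message Test`
  shared_types.proto   # extra definitions imported by example_case.proto
```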
### json

`<name>.json` — Test-data to validate the message with
```json
{
  "value": true
}
```
### pytest

`test_<name>.py` — Custom test to validate specific aspects of the generated class
```python
from tests.output_betterproto.bool.bool import Test


def test_value():
    message = Test()
    assert not message.value, "Boolean is False by default"
```
## Standard tests
The following tests are automatically executed for all cases:

- Can the generated python code be imported?
- Can the generated message class be instantiated?
- Is the generated code compatible with Google's `grpc_tools.protoc` implementation?
  - when a `.json` file is present (see the sketch after this list)
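As a rough illustration of what the `.json`-based compatibility check involves, here is a simplified sketch of a JSON round-trip for the `bool` case. This is not the actual test code (the real standard tests live in `betterproto/tests/test_inputs.py` and also compare against the reference implementation); the helper name below is made up.

```python
# Simplified, illustrative sketch of a JSON round-trip check for the `bool`
# test case. The real standard tests live in betterproto/tests/test_inputs.py.
import json

from tests.output_betterproto.bool.bool import Test


def check_json_roundtrip(json_path: str) -> None:
    with open(json_path) as f:
        sample = f.read()

    # Parse the sample data into the plugin-generated class...
    message = Test().from_json(sample)

    # ...and check that serializing it back yields equivalent JSON.
    assert json.loads(message.to_json()) == json.loads(sample)
```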
## Running the tests

- `pipenv run generate`

  This generates:
  - `betterproto/tests/output_betterproto` — the plugin-generated python classes
  - `betterproto/tests/output_reference` — reference implementation classes

- `pipenv run test`
## Intentionally Failing tests
The standard test suite includes tests that fail by intention. These tests document known bugs and missing features that are intended to be corrected in the future.
When running `pytest`, they show up as `x` or `X` in the test results.
```
betterproto/tests/test_inputs.py ..x...x..x...x.X........xx........x.....x.......x.xx....x...................... [ 84%]
```

- `.` — PASSED
- `x` — XFAIL: expected failure
- `X` — XPASS: expected failure, but still passed
Test cases marked for expected failure are declared in `inputs/config.py`.
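As a rough illustration, such a declaration might look like the following; the variable name and the case names are assumptions for illustration, not the actual contents of `config.py`:

```python
# Hypothetical sketch of betterproto/tests/inputs/config.py.
# The variable name `xfail` and the case names are illustrative assumptions.
xfail = {
    "some_known_bug_case",         # documents a known bug
    "unimplemented_feature_case",  # documents a missing feature
}
```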