Conversation
Test Results: 72 files, 493 suites, 0s ⏱️. Results for commit 8da3bdc. ♻️ This comment has been updated with latest results.
@tpmanley |
Minimum allowed coverage is Generated by 🐒 cobertura-action against 8da3bdc |
First of all, I corrected all the errors. |
@ctowns |
@HunsupJung when you're setting up the test, you can make a mock device with the following profile, then run the test as normal. This will make the tested profile include the optional capability. You can add as many optional capabilities as you want in this way.
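For context, a rough sketch of what that test setup can look like. The profile file name and the enabled_optional_capabilities override are illustrative assumptions here; check integration_test.utils in the test harness for the exact API:

local test = require "integration_test"
local t_utils = require "integration_test.utils"
local clusters = require "st.matter.clusters"

-- Illustrative sketch: the profile name and the
-- enabled_optional_capabilities option are assumptions. The idea is that
-- the optional capability gets pulled into the tested profile via the
-- profile definition overrides.
local mock_device = test.mock_device.build_test_matter_device({
  profile = t_utils.get_profile_definition(
    "lock-nocodes-notamper.yml",
    { enabled_optional_capabilities = { "battery" } }
  ),
  endpoints = {
    {
      endpoint_id = 1,
      clusters = {
        { cluster_id = clusters.DoorLock.ID, cluster_type = "SERVER" },
      },
    },
  },
})

test.mock_device.add_test_device(mock_device)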
drivers/SmartThings/matter-lock/src/test/test_new_matter_lock_battery.lua (outdated review threads, resolved)
Force-pushed a90c386 to b331d2d
@tpmanley
Thanks for resolving my two comments. There are a couple of conflicts right now, so I'd recommend squashing and rebasing your changes into a single commit on top of latest.
Signed-off-by: Hunsup Jung <hunsup.jung@samsung.com>
Force-pushed 199cfbd to 98a69db
@tpmanley |
  BATTERY_SUPPORT = "__BATTERY_SUPPORT",
}

local DoorLockFeatureMapAttr = {ID = 0xFFFC, cluster = DoorLock.ID}
@gharveymn it doesn't seem like we have any way to represent this naturally in the generated clusters. Do you have any context for why? And do you have any recommendations on how to integrate it into the driver "naturally"?
I'm thinking we may want to handle it as:
Suggested change: replace

local DoorLockFeatureMapAttr = {ID = 0xFFFC, cluster = DoorLock.ID}

with

DoorLock.attributes.FeatureMap = {
  ID = 0xFFFC,
  NAME = "FeatureMap",
  _cluster = require "st.matter.clusters.DoorLock",
  base_type = require "st.matter.clusters.DoorLock.types.Feature",
}
I'm mainly thinking of 1. whether/how we'd subscribe to this, and 2. whether we can make unit tests for it.
Not really too much context. I wasn't involved in the PRs which did the filtering. We also filter two other attributes: GeneratedCommandList and ClusterRevision, but those make sense since we don't act as a server in Lua-land.
I don't really see a reason not to add cluster-specific attribute definitions in scripting-engine for each of these where the base type is types.Feature. You may run into an issue where the Feature type doesn't exist for certain clusters, but you can just work around that by making the base type U32 or something.
The way you proposed would be a start, but I think you would be missing certain functions which would be a blocker for (1) and (2). Perhaps you can make some adjustments to the ZAP generation for scripting-engine locally, and then export the attribute definition as an embedded cluster attribute for now.
@HunsupJung to integrate this more cleanly, I think we should add the following file DoorLock/server/attributes/FeatureMap.lua for the embedded cluster definition:
local cluster_base = require "st.matter.cluster_base"
local data_types = require "st.matter.data_types"
local TLVParser = require "st.matter.TLV.TLVParser"
local FeatureMap = {
ID = 0xFFFC,
NAME = "FeatureMap",
_cluster = require "st.matter.clusters.DoorLock",
base_type = require "st.matter.clusters.DoorLock.types.Feature",
}
function FeatureMap:augment_type(data_type_obj)
for i, v in ipairs(data_type_obj.elements) do
data_type_obj.elements[i] = data_types.validate_or_build_type(v, FeatureMap.element_type)
end
end
function FeatureMap:new_value(...)
local o = self.base_type(table.unpack({...}))
return o
end
function FeatureMap:read(device, endpoint_id)
return cluster_base.read(
device,
endpoint_id,
self._cluster.ID,
self.ID,
nil
)
end
function FeatureMap:subscribe(device, endpoint_id)
return cluster_base.subscribe(
device,
endpoint_id,
self._cluster.ID,
self.ID,
nil
)
end
function FeatureMap:set_parent_cluster(cluster)
self._cluster = cluster
return self
end
function FeatureMap:build_test_report_data(
device,
endpoint_id,
value,
status
)
local data = data_types.validate_or_build_type(value, self.base_type)
return cluster_base.build_test_report_data(
device,
endpoint_id,
self._cluster.ID,
self.ID,
data,
status
)
end
function FeatureMap:deserialize(tlv_buf)
local data = TLVParser.decode_tlv(tlv_buf)
return data
end
setmetatable(FeatureMap, {__call = FeatureMap.new_value, __index = FeatureMap.base_type})
return FeatureMap
Then we can maybe, at the top of the file, update the version check to something like:
if version.api < 10 then
clusters.DoorLock = require "DoorLock"
else
clusters.DoorLock.attributes.FeatureMap = require "DoorLock.server.attributes.FeatureMap"
end
Besides making this "cleaner", I think we should create unit tests for this newly introduced logic, and we will need this attribute definition in order to write proper unit tests.
Signed-off-by: Hunsup Jung <hunsup.jung@samsung.com>
Force-pushed f6dac7a to b983f96
if version.api >= 15 and version.rpc >= 9 then
  match_profile_modular(driver, device)
end
This will not work currently, since it no longer checks for the profiling data (that check is done at the start of match_profile). We should either add a parameter to match_profile called modular_only or something, or just move that profiling-data check into the sub match-profile functions.
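One way the modular_only parameter idea could be sketched. Note this is only a sketch: profiling_data_complete and match_profile_base are placeholders standing in for whatever the existing profiling-data check and static matching logic are actually called in the driver:

-- Sketch: gate both matching paths behind the same profiling-data check.
-- `profiling_data_complete` is a placeholder for the check that runs at
-- the start of match_profile today; `match_profile_base` stands in for
-- the existing static profile matching.
local function match_profile(driver, device, modular_only)
  if not profiling_data_complete(device) then
    return -- wait until all profiling data has been collected
  end
  if modular_only then
    match_profile_modular(driver, device)
    return
  end
  match_profile_base(driver, device)
end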
It does not need to read the PowerSource cluster again, since this change is only related to the DoorLock cluster. (I mean that match_profile can be used here.)
If it needs to check the DPS feature, I will add it in the DPS PR.
I still think we need this profiling_data check as a safety precaution.
Since we will subscribe to both this attribute and the PowerSource AttributeList attribute when a device is first onboarded, it is possible that this handler will run before the PowerSource AttributeList handler; we cannot guarantee that ordering. That could trigger two profile updates in a short period, which adds significant latency, so we should avoid it.
That makes sense. I will take this into account and modify the change.
if ep_cluster.feature_map == ib.data.value then
  return
end
ep_cluster.feature_map = ib.data.value
This line will only affect the in-memory version of the st_store. This means that every time the cloud syncs with the driver, this will be overwritten with the previous data that was added on device creation.
I do not think that there is any built-in way to update this info in the cloud today, so this is an error-prone solution. I suggest that we save a separate field called
local latestDoorLockFeatureMap = "__latest_door_lock_feature_map"
or something like that.
Then we can do this:
Suggested change: replace

ep_cluster.feature_map = ib.data.value

with

set_field_for_endpoint(device, latestDoorLockFeatureMap, device_ep.endpoint_id, ib.data.value, { persist = true })
where we take these APIs from the Matter Switch driver: set_field_for_endpoint and get_field_for_endpoint.
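For reference, those helpers in the matter-switch driver are (roughly) implemented as below; worth double-checking against the current source before copying:

-- Per-endpoint field helpers, as used in the matter-switch driver:
-- they namespace a persisted field key by endpoint id.
local function set_field_for_endpoint(device, field, endpoint_id, value, additional_params)
  device:set_field(string.format("%s_%d", field, endpoint_id), value, additional_params)
end

local function get_field_for_endpoint(device, field, endpoint_id)
  return device:get_field(string.format("%s_%d", field, endpoint_id))
end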
Then, in match_profile_modular we should check whether this table has been initialized, and if it has, replace the in-memory version like you do here. Something like:
local clus_has_feature = function(feature_bitmap)
return DoorLock.are_features_supported(feature_bitmap, get_field_for_endpoint(device, latestDoorLockFeatureMap, device_ep.endpoint_id) or ep_cluster.feature_map)
end
in match_profile_modular.
Signed-off-by: Hunsup Jung <hunsup.jung@samsung.com>
Force-pushed 6b496a8 to 8da3bdc
local clus_has_feature = function(feature_bitmap)
  return DoorLock.are_features_supported(feature_bitmap, ep_cluster.feature_map)
end
To support the changes discussed here, I think we have to modify this.
Suggested change:

local clus_has_feature = function(feature_bitmap)
  return DoorLock.are_features_supported(feature_bitmap, lock_utils.get_field_for_endpoint(device, latestDoorLockFeatureMap, device_ep.endpoint_id) or ep_cluster.feature_map)
end
[capabilities.lock.ID] = {
  DoorLock.attributes.LockState,
  DoorLockFeatureMapAttr
Since this is not explicitly related to the lock capability, I think we should just add the line in device_init:
device:add_subscribed_attribute(DoorLockFeatureMapAttr)
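A minimal sketch of how that could sit in device_init (the rest of the existing init logic is elided; this only shows the ordering relative to the subscribe call):

local function device_init(driver, device)
  -- Register the extra (non-generated) attribute before subscribing so
  -- it is included in the device's subscription, independent of any
  -- capability-to-attribute mapping.
  device:add_subscribed_attribute(DoorLockFeatureMapAttr)
  device:subscribe()
end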