Add chat integration tests to match stream-chat-python test parity (#219)
📝 Walkthrough: Large v2→v3 migration. Request/response payloads are standardized to generated Request/.to_dict() and Response types; many new endpoints and optional parameters across Chat, Common, Feeds, Moderation, and Video; a new webhook module; expanded tests and CI/release workflow tweaks. Several method signatures are extended or added.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Add comprehensive test coverage for chat functionality matching the old stream-chat-python SDK. Includes tests for channels, messages, moderation, users, misc operations, reminders/locations, and team usage stats. Also updates codegen for team usage stats endpoint and undelete message fix. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
f0d2920 to d635d5a
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- test_add_moderators: check is_moderator is not True (API returns None, not False)
- test_mute_user/test_mute_with_timeout: use mutes[0] not mute (MuteResponse has a mutes list)
- test_create_reminder: response is ReminderResponseData directly, not wrapped
- test_update_reminder: use response.data.reminder (UpdateReminderResponse wraps it)
- skip test_delete_message_for_me: delete_for_me needs a body param, not a query param
- skip test_query_message_flags: V2 moderation.flag() doesn't populate chat-level flags

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Non-video tests need their own base URL (chat.stream-io-api.com) separate from the video base URL. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
♻️ Duplicate comments (4)
tests/test_chat_user.py (1)
167-188: ⚠️ Potential issue | 🟠 Major
Use task polling instead of a fixed sleep in restore flow.

`delete_users` is async (task-based); `time.sleep(2)` makes this test timing-sensitive and flaky under load. Proposed stabilization:

```diff
 def test_restore_users(client: Stream):
     """Delete a user and then restore them."""
     user_id = str(uuid.uuid4())
     client.update_users(users={user_id: UserRequest(id=user_id, name=user_id)})
-    client.delete_users(user_ids=[user_id])
-
-    # Wait for delete task
-    import time
-
-    time.sleep(2)
+    from tests.base import wait_for_task
+
+    delete_response = client.delete_users(user_ids=[user_id])
+    wait_for_task(client, delete_response.data.task_id, timeout_ms=30000)

     client.restore_users(user_ids=[user_id])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_chat_user.py` around lines 167 - 188, The test uses a fixed time.sleep(2) after calling client.delete_users in test_restore_users, which makes the test flaky; replace the fixed sleep with a polling loop that waits for the delete task to complete by repeatedly calling client.query_users(QueryUsersPayload(filter_conditions={"id": user_id})) (or, if available, a task-status API on the client) until the user is absent or a short timeout (e.g., 10s) is reached; then proceed to call client.restore_users and assert restoration as before. Ensure the loop sleeps briefly between polls (e.g., 0.2s) and fails the test if the timeout is exceeded.

tests/test_chat_misc.py (2)
80-107: ⚠️ Potential issue | 🟠 Major
Restore the `team` channel type after mutation to prevent cross-test pollution.

This test mutates shared channel-type commands and doesn't restore prior state, which can break later tests that assume the default config.
Safer pattern with rollback
```diff
 def test_update_channel_type(client: Stream):
     """Update a channel type's configuration."""
     # Get current config to know the required fields
     current = client.chat.get_channel_type(name="team")
+    original_commands = list(current.data.commands or [])

-    response = client.chat.update_channel_type(
-        name="team",
-        automod=current.data.automod,
-        automod_behavior=current.data.automod_behavior,
-        max_message_length=current.data.max_message_length,
-        commands=["ban", "unban"],
-    )
-    assert response.data.commands is not None
-    assert "ban" in response.data.commands
-    assert "unban" in response.data.commands
+    try:
+        response = client.chat.update_channel_type(
+            name="team",
+            automod=current.data.automod,
+            automod_behavior=current.data.automod_behavior,
+            max_message_length=current.data.max_message_length,
+            commands=["ban", "unban"],
+        )
+        assert response.data.commands is not None
+        assert "ban" in response.data.commands
+        assert "unban" in response.data.commands
+    finally:
+        client.chat.update_channel_type(
+            name="team",
+            automod=current.data.automod,
+            automod_behavior=current.data.automod_behavior,
+            max_message_length=current.data.max_message_length,
+            commands=original_commands,
+        )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_chat_misc.py` around lines 80 - 107, The test_update_channel_type mutates the shared "team" channel type and doesn't restore its previous configuration; capture the current configuration via client.chat.get_channel_type (e.g., current.data.commands and any other mutated fields) before calling client.chat.update_channel_type, perform the assertions, and then use a try/finally (or equivalent) to call client.chat.update_channel_type with the saved original values to roll back the changes so other tests are not affected.
183-201: ⚠️ Potential issue | 🟠 Major
Replace fixed sleeps with bounded polling for role propagation checks.

`time.sleep(2)` makes this test brittle across environments. Poll `list_roles()` with timeout/backoff and assert when the condition is met. Suggested improvement:

```diff
 def test_permissions_roles(client: Stream):
     """Create and delete a custom role."""
     role_name = f"testrole{uuid.uuid4().hex[:8]}"
     client.create_role(name=role_name)

-    time.sleep(2)
-    response = client.list_roles()
-    assert response.data.roles is not None
-    role_names = [r.name for r in response.data.roles]
-    assert role_name in role_names
+    # Poll until role appears
+    start = time.time()
+    while time.time() - start < 10:
+        response = client.list_roles()
+        role_names = [r.name for r in response.data.roles]
+        if role_name in role_names:
+            break
+        time.sleep(0.5)
+    assert role_name in role_names, f"Role {role_name} not found after creation"

     client.delete_role(name=role_name)

-    time.sleep(2)
-    response = client.list_roles()
-    role_names = [r.name for r in response.data.roles]
-    assert role_name not in role_names
+    # Poll until role disappears
+    start = time.time()
+    while time.time() - start < 10:
+        response = client.list_roles()
+        role_names = [r.name for r in response.data.roles]
+        if role_name not in role_names:
+            break
+        time.sleep(0.5)
+    assert role_name not in role_names, f"Role {role_name} still exists after deletion"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_chat_misc.py` around lines 183 - 201, The test_permissions_roles uses fixed time.sleep(2) calls which are brittle; replace them with a bounded polling loop that repeatedly calls client.list_roles() (checking response.data.roles for role_name) until the expected condition is met or a timeout elapses, using exponential backoff or fixed short intervals; do this both after client.create_role(name=role_name) (assert role_name appears) and after client.delete_role(name=role_name) (assert role_name is absent), and fail the test if the timeout is reached.

tests/test_chat_message.py (1)
44-65: ⚠️ Potential issue | 🟡 Minor
Duplicate import of `ChannelMemberRequest`.

`ChannelMemberRequest` is already imported at line 9. This inline import is redundant. Proposed fix:
```diff
 amy = random_users[0].id
 paul = random_users[1].id
 sender = random_users[2].id

-from getstream.models import ChannelMemberRequest
-
 channel.update(
     add_members=[ChannelMemberRequest(user_id=uid) for uid in [amy, paul, sender]]
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_chat_message.py` around lines 44 - 65, The test has a duplicate inline import of ChannelMemberRequest inside test_send_message_restricted_visibility; remove the inline "from getstream.models import ChannelMemberRequest" and use the existing top-level import instead so test_send_message_restricted_visibility simply constructs ChannelMemberRequest instances without re-importing.
🧹 Nitpick comments (1)
tests/test_chat_polls.py (1)
52-74: Consider wrapping cleanup in a finally block for robustness.

If the `query_polls` call or assertions fail, the poll won't be cleaned up. While this is a minor concern for test isolation, consistent cleanup patterns improve test reliability. Suggested improvement:
```diff
 def test_query_polls(client: Stream, random_user):
     """Query polls."""
     poll_name = f"Query test poll {uuid.uuid4().hex[:8]}"
     response = client.create_poll(
         name=poll_name,
         user_id=random_user.id,
         options=[
             PollOptionInput(text="Option A"),
             PollOptionInput(text="Option B"),
         ],
     )
     poll_id = response.data.poll.id

-    q_resp = client.query_polls(
-        user_id=random_user.id,
-        filter={"id": poll_id},
-    )
-    assert q_resp.data.polls is not None
-    assert len(q_resp.data.polls) >= 1
-    assert q_resp.data.polls[0].id == poll_id
-
-    # cleanup
-    client.delete_poll(poll_id=poll_id, user_id=random_user.id)
+    try:
+        q_resp = client.query_polls(
+            user_id=random_user.id,
+            filter={"id": poll_id},
+        )
+        assert q_resp.data.polls is not None
+        assert len(q_resp.data.polls) >= 1
+        assert q_resp.data.polls[0].id == poll_id
+    finally:
+        client.delete_poll(poll_id=poll_id, user_id=random_user.id)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_chat_polls.py` around lines 52 - 74, The test test_query_polls creates a poll but currently deletes it only at the end, so if query_polls or assertions raise the poll won't be cleaned up; wrap the cleanup call to client.delete_poll(poll_id=..., user_id=...) in a finally block so that after creating the poll with client.create_poll(...) you store poll_id and ensure client.delete_poll is executed in a finally clause (keeping the create/query/assert logic in the try block) to guarantee teardown even on failures.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/test_chat_message.py`:
- Around line 44-65: The test has a duplicate inline import of
ChannelMemberRequest inside test_send_message_restricted_visibility; remove the
inline "from getstream.models import ChannelMemberRequest" and use the existing
top-level import instead so test_send_message_restricted_visibility simply
constructs ChannelMemberRequest instances without re-importing.
In `@tests/test_chat_misc.py`:
- Around line 80-107: The test_update_channel_type mutates the shared "team"
channel type and doesn't restore its previous configuration; capture the current
configuration via client.chat.get_channel_type (e.g., current.data.commands and
any other mutated fields) before calling client.chat.update_channel_type,
perform the assertions, and then use a try/finally (or equivalent) to call
client.chat.update_channel_type with the saved original values to roll back the
changes so other tests are not affected.
- Around line 183-201: The test_permissions_roles uses fixed time.sleep(2) calls
which are brittle; replace them with a bounded polling loop that repeatedly
calls client.list_roles() (checking response.data.roles for role_name) until the
expected condition is met or a timeout elapses, using exponential backoff or
fixed short intervals; do this both after client.create_role(name=role_name)
(assert role_name appears) and after client.delete_role(name=role_name) (assert
role_name is absent), and fail the test if the timeout is reached.
In `@tests/test_chat_user.py`:
- Around line 167-188: The test uses a fixed time.sleep(2) after calling
client.delete_users in test_restore_users, which makes the test flaky; replace
the fixed sleep with a polling loop that waits for the delete task to complete
by repeatedly calling
client.query_users(QueryUsersPayload(filter_conditions={"id": user_id})) (or, if
available, a task-status API on the client) until the user is absent or a short
timeout (e.g., 10s) is reached; then proceed to call client.restore_users and
assert restoration as before. Ensure the loop sleeps briefly between polls
(e.g., 0.2s) and fails the test if the timeout is exceeded.
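Several of the duplicate findings above prescribe the same sleep-to-poll replacement. A minimal generic helper is sketched below; `poll_until` and its parameter names are illustrative, not part of the SDK or the test suite under review:

```python
import time


def poll_until(condition, timeout_s=10.0, interval_s=0.2, backoff=1.5, max_interval_s=2.0):
    """Call `condition` until it returns a truthy value or `timeout_s` elapses.

    Returns the truthy value on success; raises TimeoutError on timeout.
    Sleeps `interval_s` between polls, growing by `backoff` up to `max_interval_s`.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout_s}s")
        time.sleep(interval_s)
        interval_s = min(interval_s * backoff, max_interval_s)
```

A test could then replace `time.sleep(2)` with, e.g., `poll_until(lambda: role_name in {r.name for r in client.list_roles().data.roles})`, which returns as soon as the role propagates instead of waiting a fixed interval.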
---
Nitpick comments:
In `@tests/test_chat_polls.py`:
- Around line 52-74: The test test_query_polls creates a poll but currently
deletes it only at the end, so if query_polls or assertions raise the poll won't
be cleaned up; wrap the cleanup call to client.delete_poll(poll_id=...,
user_id=...) in a finally block so that after creating the poll with
client.create_poll(...) you store poll_id and ensure client.delete_poll is
executed in a finally clause (keeping the create/query/assert logic in the try
block) to guarantee teardown even on failures.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
- .github/workflows/run_tests.yml
- tests/test_chat_channel.py
- tests/test_chat_message.py
- tests/test_chat_misc.py
- tests/test_chat_moderation.py
- tests/test_chat_polls.py
- tests/test_chat_user.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/run_tests.yml:
- Around line 95-103: Add explicit secret declarations to the reusable
workflow_call schema: update the workflow_call block that defines inputs.marker
to also list the required secrets (STREAM_API_SECRET, STREAM_CHAT_API_SECRET,
STREAM_CHAT_API_KEY, STREAM_CHAT_BASE_URL, STREAM_BASE_URL, STREAM_API_KEY)
under a secrets key with required: false for each, so the static analyzer and
callers (ci.yml, release.yml) can validate and pass secrets properly while
keeping the existing env overrides in the "Run non-video tests" job.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
.github/workflows/run_tests.yml
.github/workflows/run_tests.yml
Outdated
```yaml
- name: Run non-video tests
  env:
    STREAM_API_KEY: ${{ vars.STREAM_CHAT_API_KEY }}
    STREAM_API_SECRET: ${{ secrets.STREAM_CHAT_API_SECRET }}
    STREAM_BASE_URL: ${{ vars.STREAM_CHAT_BASE_URL }}
  run: |
    uv run pytest -m "${{ inputs.marker }}" tests/ getstream/ \
      --ignore=tests/test_video_examples.py \
      --ignore=tests/test_video_integration.py
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's read the workflow file to see the full context
cat -n .github/workflows/run_tests.yml
```

Repository: GetStream/stream-py
Length of output: 4851

🏁 Script executed:

```shell
# Also check if there are other workflows that reference this one
rg -n "uses:.*run_tests" --type yaml -A 10
```

Length of output: 45

🏁 Script executed:

```shell
# Search for STREAM_CHAT_API_SECRET references in the repository
rg "STREAM_CHAT_API_SECRET" --type yaml
```

Length of output: 45

🏁 Script executed:

```shell
# Search for files that call the run_tests workflow
rg "run_tests" --type yaml
```

Length of output: 45

🏁 Script executed:

```shell
# List all workflow files to find callers
find .github/workflows -name "*.yml" -type f
```

Length of output: 149

🏁 Script executed:

```shell
# Read the ci.yml workflow
cat -n .github/workflows/ci.yml
```

Length of output: 570

🏁 Script executed:

```shell
# Read the release.yml workflow
cat -n .github/workflows/release.yml
```

Length of output: 1994
Declare secrets explicitly in the workflow_call schema.
The static analysis flag is not a false positive. While the workflow functions correctly via secrets: inherit from calling workflows (ci.yml and release.yml), the reusable workflow should explicitly declare the secrets it requires in the workflow_call schema for clarity and proper validation:
```yaml
workflow_call:
  inputs:
    marker:
      description: 'pytest -m expression (e.g., `not integration` or `integration`)'
      required: true
      type: string
  secrets:
    STREAM_API_SECRET:
      required: false
    STREAM_CHAT_API_SECRET:
      required: false
    STREAM_CHAT_API_KEY:
      required: false
    STREAM_CHAT_BASE_URL:
      required: false
    STREAM_BASE_URL:
      required: false
    STREAM_API_KEY:
      required: false
```

The test isolation approach is correct — non-video tests override environment variables with Chat-specific credentials while video tests use job-level credentials.
🧰 Tools
🪛 actionlint (1.7.11)
[error] 98-98: property "stream_chat_api_secret" is not defined in object type {actions_runner_debug: string; actions_step_debug: string; github_token: string}
(expression)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/run_tests.yml around lines 95 - 103, Add explicit secret
declarations to the reusable workflow_call schema: update the workflow_call
block that defines inputs.marker to also list the required secrets
(STREAM_API_SECRET, STREAM_CHAT_API_SECRET, STREAM_CHAT_API_KEY,
STREAM_CHAT_BASE_URL, STREAM_BASE_URL, STREAM_API_KEY) under a secrets key with
required: false for each, so the static analyzer and callers (ci.yml,
release.yml) can validate and pass secrets properly while keeping the existing
env overrides in the "Run non-video tests" job.
Ignore all video/RTC test paths in non-video step (tests/rtc/, test_video_openai, test_signaling, test_audio_stream_track, and getstream/video doctests). Run them in the video step instead. Also bump test_delete_channels timeout to 60s and fix error message. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
delete_channels task stays pending on this backend, so just assert task_id is returned without polling. Also fix wait_for_task to break on "failed" status (matching Go SDK behavior). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
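The `wait_for_task` change described in this commit (treating "failed" as terminal, matching the Go SDK) can be sketched as follows. The `get_task` method name, the status strings, and the response shape are assumptions for illustration, not verified SDK API:

```python
import time


def wait_for_task(client, task_id, timeout_ms=10000, poll_interval_s=0.5):
    """Poll a task until it reaches a terminal state ("completed" or "failed").

    Raises TimeoutError if no terminal state is reached within `timeout_ms`.
    """
    start_ms = time.time() * 1000
    while True:
        response = client.get_task(task_id=task_id)
        # Both success and failure are terminal, matching the Go SDK behavior.
        if response.data.status in ("completed", "failed"):
            return response
        if (time.time() * 1000) - start_ms > timeout_ms:
            raise TimeoutError(
                f"Task {task_id} did not reach a terminal state within {timeout_ms}ms"
            )
        time.sleep(poll_interval_s)
```

Breaking on "failed" matters here: without it, a failed delete task would spin until the timeout instead of surfacing immediately.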
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/base.py`:
- Around line 36-40: The docstring and timeout error message need to reflect
that the helper now treats both "completed" and "failed" as terminal states;
update the docstring for the helper that polls task status (the function
checking response.data.status) to say it returns on "completed" or "failed"
rather than only "completed", and change the TimeoutError message (raised using
task_id and timeout_ms) to mention that the task did not complete or failed
within the timeout window (or did not reach a terminal state within timeout) so
the contract matches the code's terminal-state behavior.
In `@tests/test_chat_channel.py`:
- Around line 374-383: The docstring for test_delete_channels is misleading: it
claims the test "polls for completion" but the test only creates a channel and
asserts a task_id from client.chat.delete_channels; update the docstring of
test_delete_channels to describe the actual behavior (create a channel and
verify the delete_channels response contains a task_id) so it accurately
reflects the assertions involving response.data.task_id and the call to
client.chat.delete_channels(cids=[cid], hard_delete=True).
- Around line 33-40: The cleanup blocks swallow all exceptions (try:
client.chat.delete_channels(...) except Exception: pass), which hides API
failures and leaks resources; change each of these to catch only the expected
API/HTTP errors (e.g., the SDK's NotFound/ResourceAlreadyDeleted/ApiError class)
and handle them (log a warning) while letting unexpected exceptions propagate
(re-raise) so tests fail; replace the bare except in the
client.chat.delete_channels calls with a specific except <ExpectedSDKException>
as e: log/record the cleanup failure, and add a final generic except Exception:
raise to avoid hiding regressions—apply this change to every occurrence of
client.chat.delete_channels in this file (the blocks around the reported
ranges).
- Around line 634-637: The test creates temp files with
tempfile.NamedTemporaryFile (producing tmp_path) for uploading; change it to use
a small reusable fixture file from tests/assets/ instead: add or reuse an asset
under tests/assets/ (<=256KB), replace the tempfile.NamedTemporaryFile block and
references to tmp_path in the test (and the similar block at lines 663-666) to
open that asset (e.g., via open("tests/assets/your_asset.txt","rb")) so the test
reads the existing asset rather than generating a temp file at runtime.
- Around line 20-685: The file contains many standalone test functions that
should be grouped into pytest test classes for organization; create logical test
classes (e.g., TestChannelLifecycle, TestChannelMembers, TestChannelModeration,
TestChannelVisibilityAndPinning, TestChannelExportsAndTasks,
TestUploadsAndFiles, TestMessageCountAndUnread) and move the related functions
into those classes as methods (preserve each function name starting with test_
and keep the same fixture arguments), e.g., put test_create_channel,
test_create_channel_with_options, test_update_channel, test_delete_channel,
test_truncate_channel, test_truncate_channel_with_options,
test_freeze_unfreeze_channel into TestChannelLifecycle; group member-related
tests like test_add_members, test_add_members_hide_history, test_invite_members,
test_invites_accept_reject, test_query_members, test_add_moderators,
test_assign_roles, test_update_member_partial, test_add_members_with_roles into
TestChannelMembers; group visibility/pinning/muting/archive tests like
test_channel_hide_show, test_mute_unmute_channel, test_pin_channel,
test_archive_channel into TestChannelVisibilityAndPinning; place moderation and
deletion/existing tasks like test_ban_user_in_channel, test_delete_channels,
test_export_channel, test_export_channel_status into
TestChannelModerationAndTasks; group uploads into TestUploadsAndFiles
(test_upload_and_delete_file/image); and group message count/unread/thread tests
into TestMessageCountAndUnread; ensure you only wrap functions into classes (no
logic changes) so fixtures still inject correctly.
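The narrow-exception cleanup pattern recommended above (catch only expected API errors, let unexpected ones propagate) might look like the sketch below. `StreamAPIException` is a stand-in for whatever error class the SDK actually raises; verify the real name before adopting this:

```python
import logging

logger = logging.getLogger(__name__)


class StreamAPIException(Exception):
    """Stand-in for the SDK's API error type (illustrative only)."""


def cleanup_channel(client, cid):
    """Best-effort channel deletion: tolerate expected API errors, surface bugs."""
    try:
        client.chat.delete_channels(cids=[cid], hard_delete=True)
    except StreamAPIException as e:
        # Expected failures (already deleted, not found): log and continue.
        logger.warning("cleanup of %s failed: %s", cid, e)
    # Any other exception propagates, so regressions fail the test run
    # instead of being silently swallowed by a bare `except Exception: pass`.
```

Each `try: client.chat.delete_channels(...) except Exception: pass` block in the test file could then call this helper instead.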
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- .github/workflows/run_tests.yml
- tests/base.py
- tests/test_chat_channel.py
tests/base.py
Outdated
```diff
         if response.data.status in ("completed", "failed"):
             return response
         if (time.time() * 1000) - start_time > timeout_ms:
             raise TimeoutError(
-                f"Task {task_id} did not complete within {timeout_ms} seconds"
+                f"Task {task_id} did not complete within {timeout_ms}ms"
             )
```
Update helper contract docs to match terminal-state behavior.
Line 36 now returns on "failed" as well as "completed", but the docstring still says “completed or timeout.” Please align the docstring (and timeout raise description) with the new terminal-state behavior to avoid misuse.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/base.py` around lines 36 - 40, The docstring and timeout error message
need to reflect that the helper now treats both "completed" and "failed" as
terminal states; update the docstring for the helper that polls task status (the
function checking response.data.status) to say it returns on "completed" or
"failed" rather than only "completed", and change the TimeoutError message
(raised using task_id and timeout_ms) to mention that the task did not complete
or failed within the timeout window (or did not reach a terminal state within
timeout) so the contract matches the code's terminal-state behavior.
tests/test_chat_channel.py
Outdated
```python
def test_create_channel(client: Stream, random_users):
    """Create a channel without specifying an ID (distinct channel)."""
    member_ids = [u.id for u in random_users]
    channel = client.chat.channel("messaging", str(uuid.uuid4()))
    response = channel.get_or_create(
        data=ChannelInput(
            created_by_id=member_ids[0],
            members=[ChannelMemberRequest(user_id=uid) for uid in member_ids],
        )
    )
    assert response.data.channel is not None
    assert response.data.channel.type == "messaging"

    # cleanup
    try:
        client.chat.delete_channels(
            cids=[f"{response.data.channel.type}:{response.data.channel.id}"],
            hard_delete=True,
        )
    except Exception:
        pass


def test_create_channel_with_options(client: Stream, random_users):
    """Create a channel with hide_for_creator option."""
    member_ids = [u.id for u in random_users]
    channel = client.chat.channel("messaging", str(uuid.uuid4()))
    response = channel.get_or_create(
        hide_for_creator=True,
        data=ChannelInput(
            created_by_id=member_ids[0],
            members=[ChannelMemberRequest(user_id=uid) for uid in member_ids],
        ),
    )
    assert response.data.channel is not None

    try:
        client.chat.delete_channels(
            cids=[f"{response.data.channel.type}:{response.data.channel.id}"],
            hard_delete=True,
        )
    except Exception:
        pass


def test_update_channel(channel: Channel, random_user):
    """Update channel data with custom fields."""
    response = channel.update(
        data=ChannelInputRequest(custom={"motd": "one apple a day..."})
    )
    assert response.data.channel is not None
    assert response.data.channel.custom.get("motd") == "one apple a day..."


def test_update_channel_partial(channel: Channel):
    """Partial update: set and unset fields."""
    channel.update_channel_partial(set={"color": "blue", "age": 30})
    response = channel.update_channel_partial(set={"color": "red"}, unset=["age"])
    assert response.data.channel is not None
    assert response.data.channel.custom.get("color") == "red"
    assert "age" not in (response.data.channel.custom or {})


def test_delete_channel(client: Stream, random_user):
    """Delete a channel and verify deleted_at is set."""
    channel_id = str(uuid.uuid4())
    ch = client.chat.channel("messaging", channel_id)
    ch.get_or_create(data=ChannelInput(created_by_id=random_user.id))
    response = ch.delete()
    assert response.data.channel is not None
    assert response.data.channel.deleted_at is not None


def test_truncate_channel(channel: Channel, random_user):
    """Truncate a channel."""
    channel.send_message(message=MessageRequest(text="hello", user_id=random_user.id))
    response = channel.truncate()
    assert response.data.channel is not None


def test_truncate_channel_with_options(channel: Channel, random_user):
    """Truncate a channel with skip_push and system message."""
    channel.send_message(message=MessageRequest(text="hello", user_id=random_user.id))
    response = channel.truncate(
        skip_push=True,
        message=MessageRequest(text="Truncating channel.", user_id=random_user.id),
    )
    assert response.data.channel is not None


def test_add_members(channel: Channel, random_users):
    """Add members to a channel."""
    user_id = random_users[0].id
    # Remove first to ensure clean state
    channel.update(remove_members=[user_id])
    response = channel.update(add_members=[ChannelMemberRequest(user_id=user_id)])
    assert response.data.members is not None
    member_ids = [m.user_id for m in response.data.members]
    assert user_id in member_ids


def test_add_members_hide_history(channel: Channel, random_users):
    """Add members with hide_history option."""
    user_id = random_users[0].id
    channel.update(remove_members=[user_id])
    response = channel.update(
        add_members=[ChannelMemberRequest(user_id=user_id)],
        hide_history=True,
    )
    assert response.data.members is not None
    member_ids = [m.user_id for m in response.data.members]
    assert user_id in member_ids


def test_invite_members(channel: Channel, random_users):
    """Invite members to a channel."""
    user_id = random_users[0].id
    channel.update(remove_members=[user_id])
    response = channel.update(invites=[ChannelMemberRequest(user_id=user_id)])
    assert response.data.members is not None
    member_ids = [m.user_id for m in response.data.members]
    assert user_id in member_ids


def test_add_moderators(channel: Channel, random_user):
    """Add and demote moderators."""
    response = channel.update(
        add_members=[ChannelMemberRequest(user_id=random_user.id)]
    )
    response = channel.update(add_moderators=[random_user.id])
    mod = [m for m in response.data.members if m.user_id == random_user.id]
    assert len(mod) == 1
    assert mod[0].is_moderator is True

    response = channel.update(demote_moderators=[random_user.id])
    mod = [m for m in response.data.members if m.user_id == random_user.id]
    assert len(mod) == 1
    assert mod[0].is_moderator is not True


def test_assign_roles(channel: Channel, random_user):
    """Assign roles to channel members."""
    channel.update(
        add_members=[
            ChannelMemberRequest(
                user_id=random_user.id, channel_role="channel_moderator"
            )
        ]
    )
    mod = None
    resp = channel.update(
        assign_roles=[
            ChannelMemberRequest(user_id=random_user.id, channel_role="channel_member")
        ]
    )
    for m in resp.data.members:
        if m.user_id == random_user.id:
            mod = m
    assert mod is not None
    assert mod.channel_role == "channel_member"


def test_mark_read(channel: Channel, random_user):
    """Mark a channel as read."""
    channel.update(add_members=[ChannelMemberRequest(user_id=random_user.id)])
    response = channel.mark_read(user_id=random_user.id)
    assert response.data.event is not None
    assert response.data.event.type == "message.read"


def test_mark_unread(channel: Channel, random_user):
    """Mark a channel as unread from a specific message."""
    msg_response = channel.send_message(
        message=MessageRequest(text="helloworld", user_id=random_user.id)
    )
    msg_id = msg_response.data.message.id
    response = channel.mark_unread(user_id=random_user.id, message_id=msg_id)
    assert response is not None


def test_channel_hide_show(client: Stream, channel: Channel, random_users):
    """Hide and show a channel for a user."""
    user_id = random_users[0].id
    channel.update(
        add_members=[
            ChannelMemberRequest(user_id=uid) for uid in [u.id for u in random_users]
        ]
    )

    # verify channel is visible
    response = client.chat.query_channels(
        filter_conditions={"id": channel.channel_id}, user_id=user_id
    )
    assert len(response.data.channels) == 1

    # hide
    channel.hide(user_id=user_id)
    response = client.chat.query_channels(
        filter_conditions={"id": channel.channel_id}, user_id=user_id
    )
    assert len(response.data.channels) == 0

    # show
    channel.show(user_id=user_id)
    response = client.chat.query_channels(
        filter_conditions={"id": channel.channel_id}, user_id=user_id
    )
    assert len(response.data.channels) == 1


def test_invites_accept_reject(client: Stream, random_users):
    """Accept and reject channel invites."""
    john = random_users[0].id
    ringo = random_users[1].id
    eric = random_users[2].id

    channel_id = "beatles-" + str(uuid.uuid4())
    ch = client.chat.channel("team", channel_id)
    ch.get_or_create(
        data=ChannelInput(
            created_by_id=john,
            members=[ChannelMemberRequest(user_id=uid) for uid in [john, ringo, eric]],
            invites=[ChannelMemberRequest(user_id=uid) for uid in [ringo, eric]],
        )
    )

    # accept invite
    accept = ch.update(accept_invite=True, user_id=ringo)
    for m in accept.data.members:
        if m.user_id == ringo:
            assert m.invited is True
            assert m.invite_accepted_at is not None

    # reject invite
    reject = ch.update(reject_invite=True, user_id=eric)
    for m in reject.data.members:
        if m.user_id == eric:
            assert m.invited is True
            assert m.invite_rejected_at is not None

    try:
        client.chat.delete_channels(cids=[f"team:{channel_id}"], hard_delete=True)
    except Exception:
        pass


def test_query_members(client: Stream, channel: Channel):
    """Query channel members with autocomplete filter."""
    rand = str(uuid.uuid4())[:8]
    user_ids = [f"{n}-{rand}" for n in ["paul", "george", "john", "jessica", "john2"]]
    client.update_users(users={uid: UserRequest(id=uid, name=uid) for uid in user_ids})
```
| for uid in user_ids: | ||
| channel.update(add_members=[ChannelMemberRequest(user_id=uid)]) | ||
|
|
||
| response = client.chat.query_members( | ||
| payload=QueryMembersPayload( | ||
| type=channel.channel_type, | ||
| id=channel.channel_id, | ||
| filter_conditions={"name": {"$autocomplete": "j"}}, | ||
| sort=[SortParamRequest(field="created_at", direction=1)], | ||
| offset=1, | ||
| limit=10, | ||
| ) | ||
| ) | ||
| assert response.data.members is not None | ||
| assert len(response.data.members) == 2 | ||
|
|
||
| try: | ||
| client.delete_users( | ||
| user_ids=user_ids, user="hard", conversations="hard", messages="hard" | ||
| ) | ||
| except Exception: | ||
| pass | ||
|
|
||
|
|
||
| def test_mute_unmute_channel(client: Stream, channel: Channel, random_users): | ||
| """Mute and unmute a channel.""" | ||
| user_id = random_users[0].id | ||
| channel.update(add_members=[ChannelMemberRequest(user_id=user_id)]) | ||
| cid = f"{channel.channel_type}:{channel.channel_id}" | ||
|
|
||
| response = client.chat.mute_channel( | ||
| user_id=user_id, channel_cids=[cid], expiration=30000 | ||
| ) | ||
| assert response.data.channel_mute is not None | ||
| assert response.data.channel_mute.expires is not None | ||
|
|
||
| # verify muted channel appears in query | ||
| response = client.chat.query_channels( | ||
| filter_conditions={"muted": True, "cid": cid}, user_id=user_id | ||
| ) | ||
| assert len(response.data.channels) == 1 | ||
|
|
||
| # unmute | ||
| client.chat.unmute_channel(user_id=user_id, channel_cids=[cid]) | ||
| response = client.chat.query_channels( | ||
| filter_conditions={"muted": True, "cid": cid}, user_id=user_id | ||
| ) | ||
| assert len(response.data.channels) == 0 | ||
|
|
||
|
|
||
| def test_export_channel(client: Stream, channel: Channel, random_users): | ||
| """Export a channel and poll the task until complete.""" | ||
| channel.send_message( | ||
| message=MessageRequest(text="Hey Joni", user_id=random_users[0].id) | ||
| ) | ||
| cid = f"{channel.channel_type}:{channel.channel_id}" | ||
| response = client.chat.export_channels(channels=[ChannelExport(cid=cid)]) | ||
| task_id = response.data.task_id | ||
| assert task_id is not None and task_id != "" | ||
|
|
||
| task_response = wait_for_task(client, task_id, timeout_ms=30000) | ||
| assert task_response.data.status == "completed" | ||
|
|
||
|
|
||
| def test_update_member_partial(channel: Channel, random_users): | ||
| """Partial update of a channel member's custom fields.""" | ||
| user_id = random_users[0].id | ||
| channel.update(add_members=[ChannelMemberRequest(user_id=user_id)]) | ||
|
|
||
| response = channel.update_member_partial(user_id=user_id, set={"hat": "blue"}) | ||
| assert response.data.channel_member is not None | ||
| assert response.data.channel_member.custom.get("hat") == "blue" | ||
|
|
||
| response = channel.update_member_partial( | ||
| user_id=user_id, set={"color": "red"}, unset=["hat"] | ||
| ) | ||
| assert response.data.channel_member.custom.get("color") == "red" | ||
| assert "hat" not in (response.data.channel_member.custom or {}) | ||
|
|
||
|
|
||
| def test_query_channels(client: Stream, random_users): | ||
| """Query channels by member filter.""" | ||
| user_id = random_users[0].id | ||
| channel_id = str(uuid.uuid4()) | ||
| ch = client.chat.channel("messaging", channel_id) | ||
| ch.get_or_create( | ||
| data=ChannelInput( | ||
| created_by_id=user_id, | ||
| members=[ChannelMemberRequest(user_id=user_id)], | ||
| ) | ||
| ) | ||
|
|
||
| response = client.chat.query_channels( | ||
| filter_conditions={"members": {"$in": [user_id]}} | ||
| ) | ||
| assert len(response.data.channels) >= 1 | ||
|
|
||
| try: | ||
| client.chat.delete_channels(cids=[f"messaging:{channel_id}"], hard_delete=True) | ||
| except Exception: | ||
| pass | ||
|
|
||
|
|
||
| def test_delete_channels(client: Stream, random_user): | ||
| """Delete channels via async task and poll for completion.""" | ||
| channel_id = str(uuid.uuid4()) | ||
| ch = client.chat.channel("messaging", channel_id) | ||
| ch.get_or_create(data=ChannelInput(created_by_id=random_user.id)) | ||
|
|
||
| cid = f"messaging:{channel_id}" | ||
| response = client.chat.delete_channels(cids=[cid], hard_delete=True) | ||
| assert response.data.task_id is not None | ||
|
|
||
|
|
||
| def test_filter_tags(channel: Channel, random_user): | ||
| """Add and remove filter tags on a channel.""" | ||
| response = channel.update(add_filter_tags=["vip"]) | ||
| assert response.data.channel is not None | ||
|
|
||
| response = channel.update(remove_filter_tags=["vip"]) | ||
| assert response.data.channel is not None | ||
|
|
||
|
|
||
| def test_pin_channel(client: Stream, channel: Channel, random_users): | ||
| """Pin and unpin a channel for a user.""" | ||
| user_id = random_users[0].id | ||
| channel.update(add_members=[ChannelMemberRequest(user_id=user_id)]) | ||
| cid = f"{channel.channel_type}:{channel.channel_id}" | ||
|
|
||
| # Pin the channel | ||
| response = channel.update_member_partial(user_id=user_id, set={"pinned": True}) | ||
| assert response is not None | ||
|
|
||
| # Query for pinned channels | ||
| response = client.chat.query_channels( | ||
| filter_conditions={"pinned": True, "cid": cid}, user_id=user_id | ||
| ) | ||
| assert len(response.data.channels) == 1 | ||
| assert response.data.channels[0].channel.cid == cid | ||
|
|
||
| # Unpin the channel | ||
| response = channel.update_member_partial(user_id=user_id, set={"pinned": False}) | ||
| assert response is not None | ||
|
|
||
| # Query for unpinned channels | ||
| response = client.chat.query_channels( | ||
| filter_conditions={"pinned": False, "cid": cid}, user_id=user_id | ||
| ) | ||
| assert len(response.data.channels) == 1 | ||
|
|
||
|
|
||
| def test_archive_channel(client: Stream, channel: Channel, random_users): | ||
| """Archive and unarchive a channel for a user.""" | ||
| user_id = random_users[0].id | ||
| channel.update(add_members=[ChannelMemberRequest(user_id=user_id)]) | ||
| cid = f"{channel.channel_type}:{channel.channel_id}" | ||
|
|
||
| # Archive the channel | ||
| response = channel.update_member_partial(user_id=user_id, set={"archived": True}) | ||
| assert response is not None | ||
|
|
||
| # Query for archived channels | ||
| response = client.chat.query_channels( | ||
| filter_conditions={"archived": True, "cid": cid}, user_id=user_id | ||
| ) | ||
| assert len(response.data.channels) == 1 | ||
| assert response.data.channels[0].channel.cid == cid | ||
|
|
||
| # Unarchive the channel | ||
| response = channel.update_member_partial(user_id=user_id, set={"archived": False}) | ||
| assert response is not None | ||
|
|
||
| # Query for unarchived channels | ||
| response = client.chat.query_channels( | ||
| filter_conditions={"archived": False, "cid": cid}, user_id=user_id | ||
| ) | ||
| assert len(response.data.channels) == 1 | ||
|
|
||
|
|
||
| def test_export_channel_status(client: Stream): | ||
| """Test error handling for export channel status with invalid task ID.""" | ||
| import pytest | ||
| from getstream.base import StreamAPIException | ||
|
|
||
| # Invalid task ID should raise an error | ||
| with pytest.raises(StreamAPIException): | ||
| client.get_task(id=str(uuid.uuid4())) | ||
|
|
||
|
|
||
| def test_ban_user_in_channel( | ||
| client: Stream, channel: Channel, random_user, server_user | ||
| ): | ||
| """Ban and unban a user at channel level.""" | ||
| channel.update( | ||
| add_members=[ | ||
| ChannelMemberRequest(user_id=uid) | ||
| for uid in [random_user.id, server_user.id] | ||
| ] | ||
| ) | ||
| cid = f"{channel.channel_type}:{channel.channel_id}" | ||
|
|
||
| client.moderation.ban( | ||
| target_user_id=random_user.id, | ||
| banned_by_id=server_user.id, | ||
| channel_cid=cid, | ||
| ) | ||
| client.moderation.ban( | ||
| target_user_id=random_user.id, | ||
| banned_by_id=server_user.id, | ||
| channel_cid=cid, | ||
| timeout=3600, | ||
| reason="offensive language is not allowed here", | ||
| ) | ||
| client.moderation.unban( | ||
| target_user_id=random_user.id, | ||
| channel_cid=cid, | ||
| ) | ||
|
|
||
|
|
||
| def test_create_distinct_channel(client: Stream, random_users): | ||
| """Create a distinct channel and verify idempotency.""" | ||
| member_ids = [u.id for u in random_users[:2]] | ||
| members = [ChannelMemberRequest(user_id=uid) for uid in member_ids] | ||
|
|
||
| response = client.chat.get_or_create_distinct_channel( | ||
| type="messaging", | ||
| data=ChannelInput(created_by_id=member_ids[0], members=members), | ||
| ) | ||
| assert response.data.channel is not None | ||
| first_cid = response.data.channel.cid | ||
|
|
||
| # calling again with same members should return same channel | ||
| response2 = client.chat.get_or_create_distinct_channel( | ||
| type="messaging", | ||
| data=ChannelInput(created_by_id=member_ids[0], members=members), | ||
| ) | ||
| assert response2.data.channel.cid == first_cid | ||
|
|
||
| try: | ||
| client.chat.delete_channels(cids=[first_cid], hard_delete=True) | ||
| except Exception: | ||
| pass | ||
|
|
||
|
|
||
| def test_freeze_unfreeze_channel(channel: Channel): | ||
| """Freeze and unfreeze a channel.""" | ||
| response = channel.update_channel_partial(set={"frozen": True}) | ||
| assert response.data.channel.frozen is True | ||
|
|
||
| response = channel.update_channel_partial(set={"frozen": False}) | ||
| assert response.data.channel.frozen is False | ||
|
|
||
|
|
||
| def test_mark_unread_with_thread(channel: Channel, random_user): | ||
| """Mark unread from a specific thread.""" | ||
| channel.update(add_members=[ChannelMemberRequest(user_id=random_user.id)]) | ||
| parent = channel.send_message( | ||
| message=MessageRequest(text="Parent for unread thread", user_id=random_user.id) | ||
| ) | ||
| parent_id = parent.data.message.id | ||
|
|
||
| channel.send_message( | ||
| message=MessageRequest( | ||
| text="Reply in thread", | ||
| user_id=random_user.id, | ||
| parent_id=parent_id, | ||
| ) | ||
| ) | ||
|
|
||
| response = channel.mark_unread( | ||
| user_id=random_user.id, | ||
| thread_id=parent_id, | ||
| ) | ||
| assert response is not None | ||
|
|
||
|
|
||
| def test_add_members_with_roles(client: Stream, channel: Channel): | ||
| """Add members with specific channel roles.""" | ||
| rand = str(uuid.uuid4())[:8] | ||
| mod_id = f"mod-{rand}" | ||
| member_id = f"member-{rand}" | ||
| user_ids = [mod_id, member_id] | ||
| client.update_users(users={uid: UserRequest(id=uid, name=uid) for uid in user_ids}) | ||
|
|
||
| channel.update( | ||
| add_members=[ | ||
| ChannelMemberRequest(user_id=mod_id, channel_role="channel_moderator"), | ||
| ChannelMemberRequest(user_id=member_id, channel_role="channel_member"), | ||
| ] | ||
| ) | ||
|
|
||
| members_resp = client.chat.query_members( | ||
| payload=QueryMembersPayload( | ||
| type=channel.channel_type, | ||
| id=channel.channel_id, | ||
| filter_conditions={"id": {"$in": user_ids}}, | ||
| ) | ||
| ) | ||
| role_map = {m.user_id: m.channel_role for m in members_resp.data.members} | ||
| assert role_map[mod_id] == "channel_moderator" | ||
| assert role_map[member_id] == "channel_member" | ||
|
|
||
| try: | ||
| client.delete_users( | ||
| user_ids=user_ids, user="hard", conversations="hard", messages="hard" | ||
| ) | ||
| except Exception: | ||
| pass | ||
|
|
||
|
|
||
| def test_message_count(client: Stream, channel: Channel, random_user): | ||
| """Verify message count on a channel.""" | ||
| channel.send_message( | ||
| message=MessageRequest(text="hello world", user_id=random_user.id) | ||
| ) | ||
|
|
||
| q_resp = client.chat.query_channels( | ||
| filter_conditions={"cid": f"{channel.channel_type}:{channel.channel_id}"}, | ||
| user_id=random_user.id, | ||
| ) | ||
| assert len(q_resp.data.channels) == 1 | ||
| ch = q_resp.data.channels[0].channel | ||
| if ch.message_count is not None: | ||
| assert ch.message_count >= 1 | ||
|
|
||
|
|
||
| def test_message_count_disabled(client: Stream, channel: Channel, random_user): | ||
| """Verify message count is None when count_messages is disabled.""" | ||
| channel.update_channel_partial(set={"config_overrides": {"count_messages": False}}) | ||
|
|
||
| channel.send_message( | ||
| message=MessageRequest(text="hello world", user_id=random_user.id) | ||
| ) | ||
|
|
||
| q_resp = client.chat.query_channels( | ||
| filter_conditions={"cid": f"{channel.channel_type}:{channel.channel_id}"}, | ||
| user_id=random_user.id, | ||
| ) | ||
| assert len(q_resp.data.channels) == 1 | ||
| assert q_resp.data.channels[0].channel.message_count is None | ||
|
|
||
|
|
||
| def test_mark_unread_with_timestamp(channel: Channel, random_user): | ||
| """Mark unread using a message timestamp.""" | ||
| channel.update(add_members=[ChannelMemberRequest(user_id=random_user.id)]) | ||
| send_resp = channel.send_message( | ||
| message=MessageRequest( | ||
| text="test message for timestamp", user_id=random_user.id | ||
| ) | ||
| ) | ||
| ts = send_resp.data.message.created_at | ||
|
|
||
| response = channel.mark_unread( | ||
| user_id=random_user.id, | ||
| message_timestamp=ts, | ||
| ) | ||
| assert response is not None | ||
|
|
||
|
|
||
| def test_upload_and_delete_file(channel: Channel, random_user): | ||
| """Upload and delete a file.""" | ||
| import os | ||
|
|
||
| with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f: | ||
| f.write(b"hello world test file content") | ||
| f.flush() | ||
| tmp_path = f.name | ||
|
|
||
| try: | ||
| upload_resp = channel.upload_channel_file( | ||
| file=tmp_path, | ||
| user=OnlyUserID(id=random_user.id), | ||
| ) | ||
| assert upload_resp.data.file is not None | ||
| file_url = upload_resp.data.file | ||
| assert "http" in file_url | ||
|
|
||
| channel.delete_channel_file(url=file_url) | ||
| except Exception as e: | ||
| if "multipart" in str(e).lower(): | ||
| import pytest | ||
|
|
||
| pytest.skip("File upload requires multipart/form-data support") | ||
| raise | ||
| finally: | ||
| os.unlink(tmp_path) | ||
|
|
||
|
|
||
| def test_upload_and_delete_image(channel: Channel, random_user): | ||
| """Upload and delete an image.""" | ||
| import os | ||
|
|
||
| with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f: | ||
| f.write(b"fake-jpg-image-data-for-testing") | ||
| f.flush() | ||
| tmp_path = f.name | ||
|
|
||
| try: | ||
| upload_resp = channel.upload_channel_image( | ||
| file=tmp_path, | ||
| user=OnlyUserID(id=random_user.id), | ||
| ) | ||
| assert upload_resp.data.file is not None | ||
| image_url = upload_resp.data.file | ||
| assert "http" in image_url | ||
|
|
||
| channel.delete_channel_image(url=image_url) | ||
| except Exception as e: | ||
| if "multipart" in str(e).lower(): | ||
| import pytest | ||
|
|
||
| pytest.skip("Image upload requires multipart/form-data support") | ||
| raise | ||
| finally: | ||
| os.unlink(tmp_path) |
Group related channel tests into test classes.
This module is fully function-based; please organize related tests into class groups for maintainability and consistency with repo test structure.
As per coding guidelines: “Keep tests well organized and use test classes to group similar tests”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_chat_channel.py` around lines 20 - 685, The file contains many
standalone test functions that should be grouped into pytest test classes for
organization; create logical test classes (e.g., TestChannelLifecycle,
TestChannelMembers, TestChannelModeration, TestChannelVisibilityAndPinning,
TestChannelExportsAndTasks, TestUploadsAndFiles, TestMessageCountAndUnread) and
move the related functions into those classes as methods (preserve each function
name starting with test_ and keep the same fixture arguments), e.g., put
test_create_channel, test_create_channel_with_options, test_update_channel,
test_delete_channel, test_truncate_channel, test_truncate_channel_with_options,
test_freeze_unfreeze_channel into TestChannelLifecycle; group member-related
tests like test_add_members, test_add_members_hide_history, test_invite_members,
test_invites_accept_reject, test_query_members, test_add_moderators,
test_assign_roles, test_update_member_partial, test_add_members_with_roles into
TestChannelMembers; group visibility/pinning/muting/archive tests like
test_channel_hide_show, test_mute_unmute_channel, test_pin_channel,
test_archive_channel into TestChannelVisibilityAndPinning; place moderation and
deletion/existing tasks like test_ban_user_in_channel, test_delete_channels,
test_export_channel, test_export_channel_status into
TestChannelModerationAndTasks; group uploads into TestUploadsAndFiles
(test_upload_and_delete_file/image); and group message count/unread/thread tests
into TestMessageCountAndUnread; ensure you only wrap functions into classes (no
logic changes) so fixtures still inject correctly.
tests/test_chat_channel.py
    # cleanup
    try:
        client.chat.delete_channels(
            cids=[f"{response.data.channel.type}:{response.data.channel.id}"],
            hard_delete=True,
        )
    except Exception:
        pass
Avoid except Exception: pass in cleanup paths.
These blocks currently hide cleanup/API regressions and can leak test resources, which hurts suite reliability. Catch only expected cleanup failures (or fail the test on unexpected exceptions).
Also applies to: 56-62, 260-263, 287-292, 368-371, 509-512, 573-578
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_chat_channel.py` around lines 33 - 40, The cleanup blocks swallow
all exceptions (try: client.chat.delete_channels(...) except Exception: pass),
which hides API failures and leaks resources; change each of these to catch only
the expected API/HTTP errors (e.g., the SDK's
NotFound/ResourceAlreadyDeleted/ApiError class) and handle them (log a warning)
while letting unexpected exceptions propagate (re-raise) so tests fail; replace
the bare except in the client.chat.delete_channels calls with a specific except
<ExpectedSDKException> as e: log/record the cleanup failure, and add a final
generic except Exception: raise to avoid hiding regressions—apply this change to
every occurrence of client.chat.delete_channels in this file (the blocks around
the reported ranges).
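A minimal sketch of the narrowed cleanup the comment asks for. The `StreamAPIException` class below is a local stand-in for the SDK's exception type (an assumption for this sketch), and `cleanup_channels` is a hypothetical helper name:

```python
import logging


class StreamAPIException(Exception):
    """Stand-in for the SDK's API error type (assumption for this sketch)."""


def cleanup_channels(client, cids):
    """Delete channels after a test, tolerating only expected API failures."""
    try:
        client.chat.delete_channels(cids=cids, hard_delete=True)
    except StreamAPIException as exc:
        # Expected cleanup failure (e.g. channel already deleted):
        # log a warning and keep going instead of failing the test.
        logging.warning("channel cleanup failed for %s: %s", cids, exc)
    # Any other exception propagates, so cleanup regressions surface in CI.
```

The key difference from `except Exception: pass` is that only the anticipated API error is swallowed; a `TypeError` from a changed SDK signature, for example, would still fail the run.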
tests/test_chat_channel.py
def test_delete_channels(client: Stream, random_user):
    """Delete channels via async task and poll for completion."""
    channel_id = str(uuid.uuid4())
    ch = client.chat.channel("messaging", channel_id)
    ch.get_or_create(data=ChannelInput(created_by_id=random_user.id))

    cid = f"messaging:{channel_id}"
    response = client.chat.delete_channels(cids=[cid], hard_delete=True)
    assert response.data.task_id is not None
Docstring and behavior diverged in test_delete_channels.
Line 375 says the test polls for completion, but the body now only verifies task_id existence. Please update the docstring to reflect actual behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_chat_channel.py` around lines 374 - 383, The docstring for
test_delete_channels is misleading: it claims the test "polls for completion"
but the test only creates a channel and asserts a task_id from
client.chat.delete_channels; update the docstring of test_delete_channels to
describe the actual behavior (create a channel and verify the delete_channels
response contains a task_id) so it accurately reflects the assertions involving
response.data.task_id and the call to client.chat.delete_channels(cids=[cid],
hard_delete=True).
tests/test_chat_channel.py
    with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f:
        f.write(b"hello world test file content")
        f.flush()
        tmp_path = f.name
Use tests/assets/ files for upload tests instead of generating temp files.
These tests currently create ad-hoc files at runtime. Please switch to reusable assets under tests/assets/ (within size limits) per repository test conventions.
As per coding guidelines: “Tests that rely on file assets should use files from tests/assets/ folder; add new files there and reuse existing ones when possible. Do not use files larger than 256 kilobytes”.
Also applies to: 663-666
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_chat_channel.py` around lines 634 - 637, The test creates temp
files with tempfile.NamedTemporaryFile (producing tmp_path) for uploading;
change it to use a small reusable fixture file from tests/assets/ instead: add
or reuse an asset under tests/assets/ (<=256KB), replace the
tempfile.NamedTemporaryFile block and references to tmp_path in the test (and
the similar block at lines 663-666) to open that asset (e.g., via
open("tests/assets/your_asset.txt","rb")) so the test reads the existing asset
rather than generating a temp file at runtime.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Narrow `except Exception` to `except StreamAPIException` in cleanup blocks - Fix stale docstring on test_delete_channels - Replace runtime tempfile creation with static test assets - Group 36 test functions into 5 logical classes Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…misc - Remove duplicate ChannelMemberRequest import in test_chat_message.py - Restore team channel type commands after mutation in test_update_channel_type - Replace fixed time.sleep with bounded polling in test_permissions_roles Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
GetChannelTypeResponse.commands returns List[Command] objects, but update_channel_type expects List[str]. Extract .name from each command. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add draft tests (create/get/delete/thread/query), enhance channel tests (members $in query, filter tags, hide/show hidden filter, invite error handling), enhance message tests (replies pagination, reactions offset), and add user custom field filter+sort test. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add draft tests (create/get/delete/thread/query), enhance channel tests (members $in query, filter tags, hide/show hidden filter, invite error for non-member), enhance message tests (replies pagination with limit, reactions offset), and add user custom field filter+sort test. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Previously wait_for_task silently returned on "failed" status, treating it the same as "completed". Now it raises RuntimeError so callers don't accidentally accept failed tasks. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
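The fail-fast behavior described in this commit can be sketched as a simplified polling loop. This is an illustrative reconstruction, not the helper's actual implementation; the `get_task` call shape mirrors the tests above:

```python
import time


def wait_for_task(client, task_id, timeout_ms=30000, poll_interval_ms=500):
    """Poll a task until it completes; raise instead of returning on failure."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        response = client.get_task(id=task_id)
        status = response.data.status
        if status == "completed":
            return response
        if status == "failed":
            # Previously a failed task returned silently, indistinguishable
            # from success; raising makes callers see the failure.
            raise RuntimeError(f"task {task_id} failed")
        time.sleep(poll_interval_ms / 1000.0)
    raise TimeoutError(f"task {task_id} did not finish within {timeout_ms}ms")
```

Callers like `test_export_channel` then only ever observe a `completed` response or an exception.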
Matches stream-chat-python which skips the equivalent test. The test leaks custom roles on failure, hitting the 25-role app limit. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Add getstream/video to the video test step so doctests inside that directory are collected again. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
.github/workflows/run_tests.yml
@@ -73,6 +73,9 @@ jobs:
I suggest we make it clear, when reading each step, exactly which credentials it runs against, and remove the job-level env block.
- name: Run non-video tests
  env:
    STREAM_API_KEY: ${{ vars.STREAM_CHAT_API_KEY }}
    STREAM_API_SECRET: ${{ secrets.STREAM_CHAT_API_SECRET }}
    STREAM_BASE_URL: ${{ vars.STREAM_CHAT_BASE_URL }}
  run: |
    uv run pytest -m "${{ inputs.marker }}" tests/ getstream/ \
    ...
- name: Run video tests
  env:
    STREAM_API_KEY: ${{ vars.STREAM_API_KEY }}
    STREAM_API_SECRET: ${{ secrets.STREAM_API_SECRET }}
    STREAM_BASE_URL: ${{ vars.STREAM_BASE_URL }}
  run: |
    uv run pytest -m "${{ inputs.marker }}" \
    ...
The step-level env override is needed because the SDK's Settings class reads STREAM_API_KEY via pydantic BaseSettings; both chat and video tests use the same env var name, just with different credentials. Remapping at the step level is the cleanest way to handle this without modifying the auto-generated SDK or adding env-switching logic in test fixtures.
I've removed the job-level env block and added explicit env to both steps so it's clear which credentials each step uses.
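The collision being worked around can be illustrated with a minimal stand-in for the pydantic-backed settings. This toy class is not the SDK's real `Settings` implementation; it only mirrors the env var names used in the workflow:

```python
import os


class Settings:
    """Toy stand-in for a BaseSettings-style config read from the process env."""

    def __init__(self):
        # Both chat and video suites resolve credentials from the same names,
        # so whichever values a CI step exports are the ones picked up.
        self.api_key = os.environ.get("STREAM_API_KEY", "")
        self.api_secret = os.environ.get("STREAM_API_SECRET", "")
```

Since both suites construct their settings from identical variable names, each CI step must export the credentials it wants under those names: e.g. mapping `STREAM_CHAT_API_KEY` onto `STREAM_API_KEY` for the non-video step.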
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Note
Medium Risk
Mostly adds/expands integration tests and CI wiring, but it also changes the public Feeds `query_comments` signature/request model and adjusts a webhook event model default, which could affect SDK users relying on those interfaces.
Overview
Adds a large set of end-to-end Chat integration tests (channels, members, messages, drafts, moderation, polls, reminders/live locations, team usage stats) plus new pytest fixtures for ephemeral users/channels and a small upload asset.
Updates CI to run non-video tests with Chat-specific credentials and then run video tests with video credentials, using pytest `--ignore` to cleanly separate the suites.
Extends Feeds `query_comments` to support `id_around`, tightens `wait_for_task` to fail fast on failed tasks, and fixes the default `type` value on `AsyncExportErrorEvent`.
Written by Cursor Bugbot for commit fd48932.
Summary by CodeRabbit
Breaking Changes
New Features
Documentation
Tests