Add chat integration tests to match stream-chat-python test parity#219

Merged
mogita merged 20 commits into cha-1578_openapi-refactor-codegen from
match-chat-test-parity-codegen
Mar 4, 2026

Conversation

@daksh-r

@daksh-r daksh-r commented Mar 2, 2026

Note

Medium Risk
Mostly adds/expands integration tests and CI wiring, but it also changes the public Feeds query_comments signature/request model and adjusts a webhook event model default, which could affect SDK users relying on those interfaces.

Overview
Adds a large set of end-to-end Chat integration tests (channels, members, messages, drafts, moderation, polls, reminders/live locations, team usage stats) plus new pytest fixtures for ephemeral users/channels and a small upload asset.

Updates CI to run non-video tests with Chat-specific credentials and then run video tests with video credentials, using pytest --ignore to cleanly separate the suites.

Extends Feeds query_comments to support id_around, tightens wait_for_task to fail fast on failed tasks, and fixes the default type value on AsyncExportErrorEvent.

Written by Cursor Bugbot for commit fd48932.

Summary by CodeRabbit

  • Breaking Changes

    • v3 naming and type model changes (Request/Response suffixes), event renames, and cursor-based pagination.
  • New Features

    • Single SDK covering Chat, Feeds, Video, Moderation.
    • Webhooks: signature verification, event type extraction and parsing.
    • Video: start/stop recording with type, participant session metrics.
    • Feeds: pinned activities, friend-reactions options, restore actions.
    • User groups, reminders, live locations, team usage stats.
  • Documentation

    • Migration guide and 3.0.0b1 changelog added.
  • Tests

    • Extensive end-to-end test suites across chat, video, feeds, moderation, and webhooks.
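The id_around cursor-style pagination called out above (the extended Feeds query_comments) can be illustrated with a small standalone sketch. window_around and its arguments are hypothetical stand-ins for the server-side behavior, not the SDK API:

```python
def window_around(items, id_around, limit):
    """Return up to `limit` items centered on the item whose id == id_around."""
    ids = [item["id"] for item in items]
    idx = ids.index(id_around)  # ValueError if the anchor id is absent
    start = max(0, idx - limit // 2)
    return items[start : start + limit]

comments = [{"id": i} for i in range(10)]
print([c["id"] for c in window_around(comments, id_around=5, limit=4)])  # [3, 4, 5, 6]
```

The idea is that instead of paging from the start, the caller anchors the page on a known comment id and receives its neighbors on both sides.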

@coderabbitai

coderabbitai bot commented Mar 2, 2026

📝 Walkthrough

Walkthrough

Large v2→v3 migration: request/response payloads standardized to generated Request/.to_dict() and Response types; many new endpoints and optional parameters across Chat, Common, Feeds, Moderation, Video; new webhook module; expanded tests and CI/release workflow tweaks. Several method signatures extended or added.

Changes

Cohort / File(s) Summary
Release workflow
.github/workflows/release.yml
Version extraction now derives VERSION from the first dist/getstream-*.tar.gz filename instead of invoking uvx -q hatch version.
Changelog & Migration docs
CHANGELOG.md, MIGRATION_v2_to_v3.md
Add v3.0.0b1 release notes and a detailed migration guide describing naming conventions, type suffix changes, and v2→v3 mapping.
Chat clients
getstream/chat/rest_client.py, getstream/chat/async_rest_client.py
Replaced ad‑hoc build_body_dict/build_query_param with Request model constructors + .to_dict(); renamed schedule_campaign to stop_campaign; added query_future_channel_bans; expanded query_channels and many endpoints with new optional params (predefined_filter, filter_values, sort_values, push_level, skip_push); some return types adjusted.
Common clients
getstream/common/rest_client.py, getstream/common/async_rest_client.py
Migrated payload construction to Request models; added user‑group APIs and many user management endpoints; update_app accepts moderation_analytics_enabled; upsert_push_provider uses PushProviderRequest.
Feeds client & wrapper
getstream/feeds/rest_client.py, getstream/feeds/feeds.py
Standardized to Request models; added optional flags (copy_custom_to_notification, create_notification_activity, skip_push, include_expired_activities, visibility_tag, friend_reactions_options); added update_activities_partial_batch, restore_activity, query_pinned_activities; Feed.get_or_create accepts friend_reactions_options.
Moderation clients
getstream/moderation/rest_client.py, getstream/moderation/async_rest_client.py
Switched to Request/Payload DTOs for bodies; parameter types changed to Payload/Request variants; upsert_moderation_rule gains action_sequences and optional action; submit_action and related endpoints accept structured payloads and an added flag field.
Video clients & call wrappers
getstream/video/rest_client.py, getstream/video/async_rest_client.py, getstream/video/call.py, getstream/video/async_call.py
Added recording endpoints requiring recording_type (start/stop), participant session metrics and queries; SIP inbound paths refactored to inbound_trunks/inbound_routing_rules; migrated payloads to Request models; call wrappers forward new methods.
Video RTC & utils
getstream/video/rtc/coordinator/ws.py, getstream/video/rtc/track_util.py
WebSocket client type changed to ClientConnection with a post-connection assertion; refined AudioFormat.validate return typing using typing.cast.
Webhooks & utilities
getstream/webhook.py, getstream/utils/__init__.py
New webhook module with 100+ EVENT_TYPE_* constants, get_event_type, parse_webhook_event, and verify_webhook_signature; build_query_param improved to handle datetimes and lists via internal _serialize_query_value.
Tests & fixtures
tests/conftest.py, tests/test_*.py, tests/base.py
Extensive test additions/updates: new fixtures (random_user, random_users, server_user, channel) and many integration tests for chat (channels, messages, moderation, polls, reminders), users, feeds, moderation, video; test harness tweaks (task wait behavior) and video tests updated to call recordings with recording_type="composite".
CI test workflow
.github/workflows/run_tests.yml
Test job split into non‑video and video steps; added STREAM_CHAT_API_KEY/SECRET/BASE_URL env vars and separate runs for video/non-video suites.
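The new webhook helpers listed in the table (verify_webhook_signature and friends) follow a verification pattern that can be sketched as below. The HMAC-SHA256-over-raw-body scheme and the usage shown are assumptions for illustration, not a transcription of getstream/webhook.py:

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature: str) -> bool:
    """Return True if `signature` matches HMAC-SHA256(secret, body) as a hex digest."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest performs a constant-time comparison
    return hmac.compare_digest(expected, signature)

# Hypothetical usage: the raw request body plus the signature header value
body = b'{"type": "message.new"}'
good_sig = hmac.new(b"my-secret", body, hashlib.sha256).hexdigest()
print(verify_signature("my-secret", body, good_sig))   # True
print(verify_signature("my-secret", body, "deadbeef"))  # False
```

Note that verification must run against the raw request bytes, before any JSON parsing, since re-serialization can change whitespace and key order.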

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰 From dicts we hop to Request models bright,
Responses wear suffixes and types take flight,
Webhooks now listen, feeds and calls extend,
Tests bloom wide — new paths on every bend,
A v3 spring hops in, nimble through the night. 🌙

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 19.06% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The PR title 'Add chat integration tests to match stream-chat-python test parity' accurately reflects a major portion of the changeset, which includes extensive new test files (test_chat_*.py, test_video_examples.py). However, the changeset is substantially broader than just tests—it includes major SDK client codegen updates affecting many Chat/Common REST clients, new API endpoints, workflow changes, and documentation. The title captures one significant aspect but is incomplete relative to the overall scope.
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.



Add comprehensive test coverage for chat functionality matching
the old stream-chat-python SDK. Includes tests for channels,
messages, moderation, users, misc operations, reminders/locations,
and team usage stats. Also updates codegen for the team usage stats
endpoint and includes an undelete message fix.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- test_add_moderators: check is_moderator is not True (API returns None, not False)
- test_mute_user/test_mute_with_timeout: use mutes[0] not mute (MuteResponse has mutes list)
- test_create_reminder: response is ReminderResponseData directly, not wrapped
- test_update_reminder: use response.data.reminder (UpdateReminderResponse wraps it)
- skip test_delete_message_for_me: delete_for_me needs body param not query param
- skip test_query_message_flags: V2 moderation.flag() doesn't populate chat-level flags

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Non-video tests need their own base URL (chat.stream-io-api.com) separate
from the video base URL.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (4)
tests/test_chat_user.py (1)

167-188: ⚠️ Potential issue | 🟠 Major

Use task polling instead of a fixed sleep in restore flow.

delete_users is async (task-based); time.sleep(2) makes this test timing-sensitive and flaky under load.

Proposed stabilization
 def test_restore_users(client: Stream):
     """Delete a user and then restore them."""
     user_id = str(uuid.uuid4())
     client.update_users(users={user_id: UserRequest(id=user_id, name=user_id)})
-    client.delete_users(user_ids=[user_id])
-
-    # Wait for delete task
-    import time
-
-    time.sleep(2)
+    from tests.base import wait_for_task
+
+    delete_response = client.delete_users(user_ids=[user_id])
+    wait_for_task(client, delete_response.data.task_id, timeout_ms=30000)

     client.restore_users(user_ids=[user_id])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_chat_user.py` around lines 167 - 188, The test uses a fixed
time.sleep(2) after calling client.delete_users in test_restore_users, which
makes the test flaky; replace the fixed sleep with a polling loop that waits for
the delete task to complete by repeatedly calling
client.query_users(QueryUsersPayload(filter_conditions={"id": user_id})) (or, if
available, a task-status API on the client) until the user is absent or a short
timeout (e.g., 10s) is reached; then proceed to call client.restore_users and
assert restoration as before. Ensure the loop sleeps briefly between polls
(e.g., 0.2s) and fails the test if the timeout is exceeded.
tests/test_chat_misc.py (2)

80-107: ⚠️ Potential issue | 🟠 Major

Restore team channel type after mutation to prevent cross-test pollution.

This test mutates shared channel-type commands and doesn't restore prior state, which can break later tests that assume default config.

Safer pattern with rollback
 def test_update_channel_type(client: Stream):
     """Update a channel type's configuration."""
     # Get current config to know the required fields
     current = client.chat.get_channel_type(name="team")
+    original_commands = list(current.data.commands or [])
-    response = client.chat.update_channel_type(
-        name="team",
-        automod=current.data.automod,
-        automod_behavior=current.data.automod_behavior,
-        max_message_length=current.data.max_message_length,
-        commands=["ban", "unban"],
-    )
-    assert response.data.commands is not None
-    assert "ban" in response.data.commands
-    assert "unban" in response.data.commands
+    try:
+        response = client.chat.update_channel_type(
+            name="team",
+            automod=current.data.automod,
+            automod_behavior=current.data.automod_behavior,
+            max_message_length=current.data.max_message_length,
+            commands=["ban", "unban"],
+        )
+        assert response.data.commands is not None
+        assert "ban" in response.data.commands
+        assert "unban" in response.data.commands
+    finally:
+        client.chat.update_channel_type(
+            name="team",
+            automod=current.data.automod,
+            automod_behavior=current.data.automod_behavior,
+            max_message_length=current.data.max_message_length,
+            commands=original_commands,
+        )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_chat_misc.py` around lines 80 - 107, The test_update_channel_type
mutates the shared "team" channel type and doesn't restore its previous
configuration; capture the current configuration via
client.chat.get_channel_type (e.g., current.data.commands and any other mutated
fields) before calling client.chat.update_channel_type, perform the assertions,
and then use a try/finally (or equivalent) to call
client.chat.update_channel_type with the saved original values to roll back the
changes so other tests are not affected.

183-201: ⚠️ Potential issue | 🟠 Major

Replace fixed sleeps with bounded polling for role propagation checks.

time.sleep(2) makes this test brittle across environments. Poll list_roles() with timeout/backoff and assert when condition is met.

Suggested improvement
 def test_permissions_roles(client: Stream):
     """Create and delete a custom role."""
     role_name = f"testrole{uuid.uuid4().hex[:8]}"

     client.create_role(name=role_name)
-    time.sleep(2)

-    response = client.list_roles()
-    assert response.data.roles is not None
-    role_names = [r.name for r in response.data.roles]
-    assert role_name in role_names
+    # Poll until role appears
+    start = time.time()
+    while time.time() - start < 10:
+        response = client.list_roles()
+        role_names = [r.name for r in response.data.roles]
+        if role_name in role_names:
+            break
+        time.sleep(0.5)
+    assert role_name in role_names, f"Role {role_name} not found after creation"

     client.delete_role(name=role_name)
-    time.sleep(2)

-    response = client.list_roles()
-    role_names = [r.name for r in response.data.roles]
-    assert role_name not in role_names
+    # Poll until role disappears
+    start = time.time()
+    while time.time() - start < 10:
+        response = client.list_roles()
+        role_names = [r.name for r in response.data.roles]
+        if role_name not in role_names:
+            break
+        time.sleep(0.5)
+    assert role_name not in role_names, f"Role {role_name} still exists after deletion"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_chat_misc.py` around lines 183 - 201, The test_permissions_roles
uses fixed time.sleep(2) calls which are brittle; replace them with a bounded
polling loop that repeatedly calls client.list_roles() (checking
response.data.roles for role_name) until the expected condition is met or a
timeout elapses, using exponential backoff or fixed short intervals; do this
both after client.create_role(name=role_name) (assert role_name appears) and
after client.delete_role(name=role_name) (assert role_name is absent), and fail
the test if the timeout is reached.
tests/test_chat_message.py (1)

44-65: ⚠️ Potential issue | 🟡 Minor

Duplicate import of ChannelMemberRequest.

ChannelMemberRequest is already imported at line 9. This inline import is redundant.

Proposed fix
     amy = random_users[0].id
     paul = random_users[1].id
     sender = random_users[2].id

-    from getstream.models import ChannelMemberRequest
-
     channel.update(
         add_members=[ChannelMemberRequest(user_id=uid) for uid in [amy, paul, sender]]
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_chat_message.py` around lines 44 - 65, The test has a duplicate
inline import of ChannelMemberRequest inside
test_send_message_restricted_visibility; remove the inline "from
getstream.models import ChannelMemberRequest" and use the existing top-level
import instead so test_send_message_restricted_visibility simply constructs
ChannelMemberRequest instances without re-importing.
🧹 Nitpick comments (1)
tests/test_chat_polls.py (1)

52-74: Consider wrapping cleanup in a finally block for robustness.

If the query_polls call or assertions fail, the poll won't be cleaned up. While this is a minor concern for test isolation, consistent cleanup patterns improve test reliability.

Suggested improvement
 def test_query_polls(client: Stream, random_user):
     """Query polls."""
     poll_name = f"Query test poll {uuid.uuid4().hex[:8]}"
     response = client.create_poll(
         name=poll_name,
         user_id=random_user.id,
         options=[
             PollOptionInput(text="Option A"),
             PollOptionInput(text="Option B"),
         ],
     )
     poll_id = response.data.poll.id

-    q_resp = client.query_polls(
-        user_id=random_user.id,
-        filter={"id": poll_id},
-    )
-    assert q_resp.data.polls is not None
-    assert len(q_resp.data.polls) >= 1
-    assert q_resp.data.polls[0].id == poll_id
-
-    # cleanup
-    client.delete_poll(poll_id=poll_id, user_id=random_user.id)
+    try:
+        q_resp = client.query_polls(
+            user_id=random_user.id,
+            filter={"id": poll_id},
+        )
+        assert q_resp.data.polls is not None
+        assert len(q_resp.data.polls) >= 1
+        assert q_resp.data.polls[0].id == poll_id
+    finally:
+        client.delete_poll(poll_id=poll_id, user_id=random_user.id)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_chat_polls.py` around lines 52 - 74, The test test_query_polls
creates a poll but currently deletes it only at the end, so if query_polls or
assertions raise the poll won't be cleaned up; wrap the cleanup call to
client.delete_poll(poll_id=..., user_id=...) in a finally block so that after
creating the poll with client.create_poll(...) you store poll_id and ensure
client.delete_poll is executed in a finally clause (keeping the
create/query/assert logic in the try block) to guarantee teardown even on
failures.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@tests/test_chat_message.py`:
- Around line 44-65: The test has a duplicate inline import of
ChannelMemberRequest inside test_send_message_restricted_visibility; remove the
inline "from getstream.models import ChannelMemberRequest" and use the existing
top-level import instead so test_send_message_restricted_visibility simply
constructs ChannelMemberRequest instances without re-importing.

In `@tests/test_chat_misc.py`:
- Around line 80-107: The test_update_channel_type mutates the shared "team"
channel type and doesn't restore its previous configuration; capture the current
configuration via client.chat.get_channel_type (e.g., current.data.commands and
any other mutated fields) before calling client.chat.update_channel_type,
perform the assertions, and then use a try/finally (or equivalent) to call
client.chat.update_channel_type with the saved original values to roll back the
changes so other tests are not affected.
- Around line 183-201: The test_permissions_roles uses fixed time.sleep(2) calls
which are brittle; replace them with a bounded polling loop that repeatedly
calls client.list_roles() (checking response.data.roles for role_name) until the
expected condition is met or a timeout elapses, using exponential backoff or
fixed short intervals; do this both after client.create_role(name=role_name)
(assert role_name appears) and after client.delete_role(name=role_name) (assert
role_name is absent), and fail the test if the timeout is reached.

In `@tests/test_chat_user.py`:
- Around line 167-188: The test uses a fixed time.sleep(2) after calling
client.delete_users in test_restore_users, which makes the test flaky; replace
the fixed sleep with a polling loop that waits for the delete task to complete
by repeatedly calling
client.query_users(QueryUsersPayload(filter_conditions={"id": user_id})) (or, if
available, a task-status API on the client) until the user is absent or a short
timeout (e.g., 10s) is reached; then proceed to call client.restore_users and
assert restoration as before. Ensure the loop sleeps briefly between polls
(e.g., 0.2s) and fails the test if the timeout is exceeded.

---

Nitpick comments:
In `@tests/test_chat_polls.py`:
- Around line 52-74: The test test_query_polls creates a poll but currently
deletes it only at the end, so if query_polls or assertions raise the poll won't
be cleaned up; wrap the cleanup call to client.delete_poll(poll_id=...,
user_id=...) in a finally block so that after creating the poll with
client.create_poll(...) you store poll_id and ensure client.delete_poll is
executed in a finally clause (keeping the create/query/assert logic in the try
block) to guarantee teardown even on failures.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between bc791a0 and 9689bd2.

📒 Files selected for processing (7)
  • .github/workflows/run_tests.yml
  • tests/test_chat_channel.py
  • tests/test_chat_message.py
  • tests/test_chat_misc.py
  • tests/test_chat_moderation.py
  • tests/test_chat_polls.py
  • tests/test_chat_user.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/run_tests.yml:
- Around line 95-103: Add explicit secret declarations to the reusable
workflow_call schema: update the workflow_call block that defines inputs.marker
to also list the required secrets (STREAM_API_SECRET, STREAM_CHAT_API_SECRET,
STREAM_CHAT_API_KEY, STREAM_CHAT_BASE_URL, STREAM_BASE_URL, STREAM_API_KEY)
under a secrets key with required: false for each, so the static analyzer and
callers (ci.yml, release.yml) can validate and pass secrets properly while
keeping the existing env overrides in the "Run non-video tests" job.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 9689bd2 and f195524.

📒 Files selected for processing (1)
  • .github/workflows/run_tests.yml

Comment on lines +95 to +103
      - name: Run non-video tests
        env:
          STREAM_API_KEY: ${{ vars.STREAM_CHAT_API_KEY }}
          STREAM_API_SECRET: ${{ secrets.STREAM_CHAT_API_SECRET }}
          STREAM_BASE_URL: ${{ vars.STREAM_CHAT_BASE_URL }}
        run: |
          uv run pytest -m "${{ inputs.marker }}" tests/ getstream/ \
            --ignore=tests/test_video_examples.py \
            --ignore=tests/test_video_integration.py

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's read the workflow file to see the full context
cat -n .github/workflows/run_tests.yml

Repository: GetStream/stream-py

Length of output: 4851


🏁 Script executed:

# Also check if there are other workflows that reference this one
rg -n "uses:.*run_tests" --type yaml -A 10

Repository: GetStream/stream-py

Length of output: 45


🏁 Script executed:

# Search for STREAM_CHAT_API_SECRET references in the repository
rg "STREAM_CHAT_API_SECRET" --type yaml

Repository: GetStream/stream-py

Length of output: 45


🏁 Script executed:

# Search for files that call the run_tests workflow
rg "run_tests" --type yaml

Repository: GetStream/stream-py

Length of output: 45


🏁 Script executed:

# List all workflow files to find callers
find .github/workflows -name "*.yml" -type f

Repository: GetStream/stream-py

Length of output: 149


🏁 Script executed:

# Read the ci.yml workflow
cat -n .github/workflows/ci.yml

Repository: GetStream/stream-py

Length of output: 570


🏁 Script executed:

# Read the release.yml workflow
cat -n .github/workflows/release.yml

Repository: GetStream/stream-py

Length of output: 1994


Declare secrets explicitly in the workflow_call schema.

The static analysis flag is not a false positive. While the workflow functions correctly via secrets: inherit from calling workflows (ci.yml and release.yml), the reusable workflow should explicitly declare the secrets it requires in the workflow_call schema for clarity and proper validation:

workflow_call:
  inputs:
    marker:
      description: 'pytest -m expression (e.g., `not integration` or `integration`)'
      required: true
      type: string
  secrets:
    STREAM_API_SECRET:
      required: false
    STREAM_CHAT_API_SECRET:
      required: false
    STREAM_CHAT_API_KEY:
      required: false
    STREAM_CHAT_BASE_URL:
      required: false
    STREAM_BASE_URL:
      required: false
    STREAM_API_KEY:
      required: false

The test isolation approach is correct — non-video tests override environment variables with Chat-specific credentials while video tests use job-level credentials.

🧰 Tools
🪛 actionlint (1.7.11)

[error] 98-98: property "stream_chat_api_secret" is not defined in object type {actions_runner_debug: string; actions_step_debug: string; github_token: string}

(expression)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/run_tests.yml around lines 95 - 103, Add explicit secret
declarations to the reusable workflow_call schema: update the workflow_call
block that defines inputs.marker to also list the required secrets
(STREAM_API_SECRET, STREAM_CHAT_API_SECRET, STREAM_CHAT_API_KEY,
STREAM_CHAT_BASE_URL, STREAM_BASE_URL, STREAM_API_KEY) under a secrets key with
required: false for each, so the static analyzer and callers (ci.yml,
release.yml) can validate and pass secrets properly while keeping the existing
env overrides in the "Run non-video tests" job.

Daksh and others added 2 commits March 2, 2026 13:53
Ignore all video/RTC test paths in non-video step (tests/rtc/,
test_video_openai, test_signaling, test_audio_stream_track, and
getstream/video doctests). Run them in the video step instead.
Also bump test_delete_channels timeout to 60s and fix error message.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
delete_channels task stays pending on this backend, so just assert
task_id is returned without polling. Also fix wait_for_task to break
on "failed" status (matching Go SDK behavior).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
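The wait_for_task change described in this commit (treat "failed" as terminal, matching the Go SDK) can be sketched as follows; get_status is a hypothetical stand-in for the SDK's task-status call, not the actual tests/base.py helper:

```python
import time

def wait_for_task(get_status, task_id, timeout_ms=10_000, poll_s=0.2):
    """Poll until the task reaches a terminal state ("completed" or "failed"),
    raising TimeoutError if neither is reached within timeout_ms."""
    start_ms = time.time() * 1000
    while True:
        status = get_status(task_id)
        if status in ("completed", "failed"):
            return status
        if (time.time() * 1000) - start_ms > timeout_ms:
            raise TimeoutError(
                f"Task {task_id} did not reach a terminal state within {timeout_ms}ms"
            )
        time.sleep(poll_s)

# Simulated backend: pending twice, then failed -> returns quickly on "failed"
statuses = iter(["pending", "pending", "failed"])
print(wait_for_task(lambda t: next(statuses), "task-1", poll_s=0.01))  # failed
```

Breaking on "failed" avoids burning the whole timeout on a task that the backend has already given up on.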

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/base.py`:
- Around line 36-40: The docstring and timeout error message need to reflect
that the helper now treats both "completed" and "failed" as terminal states;
update the docstring for the helper that polls task status (the function
checking response.data.status) to say it returns on "completed" or "failed"
rather than only "completed", and change the TimeoutError message (raised using
task_id and timeout_ms) to mention that the task did not complete or failed
within the timeout window (or did not reach a terminal state within timeout) so
the contract matches the code's terminal-state behavior.

In `@tests/test_chat_channel.py`:
- Around line 374-383: The docstring for test_delete_channels is misleading: it
claims the test "polls for completion" but the test only creates a channel and
asserts a task_id from client.chat.delete_channels; update the docstring of
test_delete_channels to describe the actual behavior (create a channel and
verify the delete_channels response contains a task_id) so it accurately
reflects the assertions involving response.data.task_id and the call to
client.chat.delete_channels(cids=[cid], hard_delete=True).
- Around line 33-40: The cleanup blocks swallow all exceptions (try:
client.chat.delete_channels(...) except Exception: pass), which hides API
failures and leaks resources; change each of these to catch only the expected
API/HTTP errors (e.g., the SDK's NotFound/ResourceAlreadyDeleted/ApiError class)
and handle them (log a warning) while letting unexpected exceptions propagate
(re-raise) so tests fail; replace the bare except in the
client.chat.delete_channels calls with a specific except <ExpectedSDKException>
as e: log/record the cleanup failure, and add a final generic except Exception:
raise to avoid hiding regressions—apply this change to every occurrence of
client.chat.delete_channels in this file (the blocks around the reported
ranges).
- Around line 634-637: The test creates temp files with
tempfile.NamedTemporaryFile (producing tmp_path) for uploading; change it to use
a small reusable fixture file from tests/assets/ instead: add or reuse an asset
under tests/assets/ (<=256KB), replace the tempfile.NamedTemporaryFile block and
references to tmp_path in the test (and the similar block at lines 663-666) to
open that asset (e.g., via open("tests/assets/your_asset.txt","rb")) so the test
reads the existing asset rather than generating a temp file at runtime.
- Around line 20-685: The file contains many standalone test functions that
should be grouped into pytest test classes for organization; create logical test
classes (e.g., TestChannelLifecycle, TestChannelMembers, TestChannelModeration,
TestChannelVisibilityAndPinning, TestChannelExportsAndTasks,
TestUploadsAndFiles, TestMessageCountAndUnread) and move the related functions
into those classes as methods (preserve each function name starting with test_
and keep the same fixture arguments), e.g., put test_create_channel,
test_create_channel_with_options, test_update_channel, test_delete_channel,
test_truncate_channel, test_truncate_channel_with_options,
test_freeze_unfreeze_channel into TestChannelLifecycle; group member-related
tests like test_add_members, test_add_members_hide_history, test_invite_members,
test_invites_accept_reject, test_query_members, test_add_moderators,
test_assign_roles, test_update_member_partial, test_add_members_with_roles into
TestChannelMembers; group visibility/pinning/muting/archive tests like
test_channel_hide_show, test_mute_unmute_channel, test_pin_channel,
test_archive_channel into TestChannelVisibilityAndPinning; place moderation and
deletion/existing tasks like test_ban_user_in_channel, test_delete_channels,
test_export_channel, test_export_channel_status into
TestChannelModerationAndTasks; group uploads into TestUploadsAndFiles
(test_upload_and_delete_file/image); and group message count/unread/thread tests
into TestMessageCountAndUnread; ensure you only wrap functions into classes (no
logic changes) so fixtures still inject correctly.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between f195524 and 3c0ac3d.

📒 Files selected for processing (3)
  • .github/workflows/run_tests.yml
  • tests/base.py
  • tests/test_chat_channel.py

tests/base.py Outdated
Comment on lines +36 to +40
            if response.data.status in ("completed", "failed"):
                return response
            if (time.time() * 1000) - start_time > timeout_ms:
                raise TimeoutError(
-                   f"Task {task_id} did not complete within {timeout_ms} seconds"
+                   f"Task {task_id} did not complete within {timeout_ms}ms"

⚠️ Potential issue | 🟡 Minor

Update helper contract docs to match terminal-state behavior.

Line 36 now returns on "failed" as well as "completed", but the docstring still says “completed or timeout.” Please align the docstring (and timeout raise description) with the new terminal-state behavior to avoid misuse.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/base.py` around lines 36 - 40, The docstring and timeout error message
need to reflect that the helper now treats both "completed" and "failed" as
terminal states; update the docstring for the helper that polls task status (the
function checking response.data.status) to say it returns on "completed" or
"failed" rather than only "completed", and change the TimeoutError message
(raised using task_id and timeout_ms) to mention that the task did not complete
or failed within the timeout window (or did not reach a terminal state within
timeout) so the contract matches the code's terminal-state behavior.
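A minimal sketch of the aligned helper contract (the `Stream` client and the task-status response shape are assumptions inferred from the snippets in this PR, and the fail-fast `RuntimeError` on "failed" reflects the behavior a later commit in this PR adopts):

```python
import time


def wait_for_task(client, task_id, timeout_ms=10000, poll_interval_ms=500):
    """Poll a task until it reaches a terminal state.

    Returns the response once status is "completed"; raises RuntimeError
    if the task reports "failed", and TimeoutError if no terminal state
    is reached within timeout_ms.
    """
    start_ms = time.time() * 1000
    while True:
        response = client.get_task(id=task_id)
        status = response.data.status
        if status == "completed":
            return response
        if status == "failed":
            # fail fast instead of handing a failed task back to the caller
            raise RuntimeError(f"Task {task_id} failed")
        if (time.time() * 1000) - start_ms > timeout_ms:
            raise TimeoutError(
                f"Task {task_id} did not reach a terminal state within {timeout_ms}ms"
            )
        time.sleep(poll_interval_ms / 1000)
```

With this shape, the docstring, the timeout message, and the terminal-state handling all describe the same contract.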

Comment on lines +20 to +685
def test_create_channel(client: Stream, random_users):
"""Create a channel without specifying an ID (distinct channel)."""
member_ids = [u.id for u in random_users]
channel = client.chat.channel("messaging", str(uuid.uuid4()))
response = channel.get_or_create(
data=ChannelInput(
created_by_id=member_ids[0],
members=[ChannelMemberRequest(user_id=uid) for uid in member_ids],
)
)
assert response.data.channel is not None
assert response.data.channel.type == "messaging"

# cleanup
try:
client.chat.delete_channels(
cids=[f"{response.data.channel.type}:{response.data.channel.id}"],
hard_delete=True,
)
except Exception:
pass


def test_create_channel_with_options(client: Stream, random_users):
"""Create a channel with hide_for_creator option."""
member_ids = [u.id for u in random_users]
channel = client.chat.channel("messaging", str(uuid.uuid4()))
response = channel.get_or_create(
hide_for_creator=True,
data=ChannelInput(
created_by_id=member_ids[0],
members=[ChannelMemberRequest(user_id=uid) for uid in member_ids],
),
)
assert response.data.channel is not None

try:
client.chat.delete_channels(
cids=[f"{response.data.channel.type}:{response.data.channel.id}"],
hard_delete=True,
)
except Exception:
pass


def test_update_channel(channel: Channel, random_user):
"""Update channel data with custom fields."""
response = channel.update(
data=ChannelInputRequest(custom={"motd": "one apple a day..."})
)
assert response.data.channel is not None
assert response.data.channel.custom.get("motd") == "one apple a day..."


def test_update_channel_partial(channel: Channel):
"""Partial update: set and unset fields."""
channel.update_channel_partial(set={"color": "blue", "age": 30})
response = channel.update_channel_partial(set={"color": "red"}, unset=["age"])
assert response.data.channel is not None
assert response.data.channel.custom.get("color") == "red"
assert "age" not in (response.data.channel.custom or {})


def test_delete_channel(client: Stream, random_user):
"""Delete a channel and verify deleted_at is set."""
channel_id = str(uuid.uuid4())
ch = client.chat.channel("messaging", channel_id)
ch.get_or_create(data=ChannelInput(created_by_id=random_user.id))
response = ch.delete()
assert response.data.channel is not None
assert response.data.channel.deleted_at is not None


def test_truncate_channel(channel: Channel, random_user):
"""Truncate a channel."""
channel.send_message(message=MessageRequest(text="hello", user_id=random_user.id))
response = channel.truncate()
assert response.data.channel is not None


def test_truncate_channel_with_options(channel: Channel, random_user):
"""Truncate a channel with skip_push and system message."""
channel.send_message(message=MessageRequest(text="hello", user_id=random_user.id))
response = channel.truncate(
skip_push=True,
message=MessageRequest(text="Truncating channel.", user_id=random_user.id),
)
assert response.data.channel is not None


def test_add_members(channel: Channel, random_users):
"""Add members to a channel."""
user_id = random_users[0].id
# Remove first to ensure clean state
channel.update(remove_members=[user_id])
response = channel.update(add_members=[ChannelMemberRequest(user_id=user_id)])
assert response.data.members is not None
member_ids = [m.user_id for m in response.data.members]
assert user_id in member_ids


def test_add_members_hide_history(channel: Channel, random_users):
"""Add members with hide_history option."""
user_id = random_users[0].id
channel.update(remove_members=[user_id])
response = channel.update(
add_members=[ChannelMemberRequest(user_id=user_id)],
hide_history=True,
)
assert response.data.members is not None
member_ids = [m.user_id for m in response.data.members]
assert user_id in member_ids


def test_invite_members(channel: Channel, random_users):
"""Invite members to a channel."""
user_id = random_users[0].id
channel.update(remove_members=[user_id])
response = channel.update(invites=[ChannelMemberRequest(user_id=user_id)])
assert response.data.members is not None
member_ids = [m.user_id for m in response.data.members]
assert user_id in member_ids


def test_add_moderators(channel: Channel, random_user):
"""Add and demote moderators."""
response = channel.update(
add_members=[ChannelMemberRequest(user_id=random_user.id)]
)
response = channel.update(add_moderators=[random_user.id])
mod = [m for m in response.data.members if m.user_id == random_user.id]
assert len(mod) == 1
assert mod[0].is_moderator is True

response = channel.update(demote_moderators=[random_user.id])
mod = [m for m in response.data.members if m.user_id == random_user.id]
assert len(mod) == 1
assert mod[0].is_moderator is not True


def test_assign_roles(channel: Channel, random_user):
"""Assign roles to channel members."""
channel.update(
add_members=[
ChannelMemberRequest(
user_id=random_user.id, channel_role="channel_moderator"
)
]
)
mod = None
resp = channel.update(
assign_roles=[
ChannelMemberRequest(user_id=random_user.id, channel_role="channel_member")
]
)
for m in resp.data.members:
if m.user_id == random_user.id:
mod = m
assert mod is not None
assert mod.channel_role == "channel_member"


def test_mark_read(channel: Channel, random_user):
"""Mark a channel as read."""
channel.update(add_members=[ChannelMemberRequest(user_id=random_user.id)])
response = channel.mark_read(user_id=random_user.id)
assert response.data.event is not None
assert response.data.event.type == "message.read"


def test_mark_unread(channel: Channel, random_user):
"""Mark a channel as unread from a specific message."""
msg_response = channel.send_message(
message=MessageRequest(text="helloworld", user_id=random_user.id)
)
msg_id = msg_response.data.message.id
response = channel.mark_unread(user_id=random_user.id, message_id=msg_id)
assert response is not None


def test_channel_hide_show(client: Stream, channel: Channel, random_users):
"""Hide and show a channel for a user."""
user_id = random_users[0].id
channel.update(
add_members=[
ChannelMemberRequest(user_id=uid) for uid in [u.id for u in random_users]
]
)

# verify channel is visible
response = client.chat.query_channels(
filter_conditions={"id": channel.channel_id}, user_id=user_id
)
assert len(response.data.channels) == 1

# hide
channel.hide(user_id=user_id)
response = client.chat.query_channels(
filter_conditions={"id": channel.channel_id}, user_id=user_id
)
assert len(response.data.channels) == 0

# show
channel.show(user_id=user_id)
response = client.chat.query_channels(
filter_conditions={"id": channel.channel_id}, user_id=user_id
)
assert len(response.data.channels) == 1


def test_invites_accept_reject(client: Stream, random_users):
"""Accept and reject channel invites."""
john = random_users[0].id
ringo = random_users[1].id
eric = random_users[2].id

channel_id = "beatles-" + str(uuid.uuid4())
ch = client.chat.channel("team", channel_id)
ch.get_or_create(
data=ChannelInput(
created_by_id=john,
members=[ChannelMemberRequest(user_id=uid) for uid in [john, ringo, eric]],
invites=[ChannelMemberRequest(user_id=uid) for uid in [ringo, eric]],
)
)

# accept invite
accept = ch.update(accept_invite=True, user_id=ringo)
for m in accept.data.members:
if m.user_id == ringo:
assert m.invited is True
assert m.invite_accepted_at is not None

# reject invite
reject = ch.update(reject_invite=True, user_id=eric)
for m in reject.data.members:
if m.user_id == eric:
assert m.invited is True
assert m.invite_rejected_at is not None

try:
client.chat.delete_channels(cids=[f"team:{channel_id}"], hard_delete=True)
except Exception:
pass


def test_query_members(client: Stream, channel: Channel):
"""Query channel members with autocomplete filter."""
rand = str(uuid.uuid4())[:8]
user_ids = [f"{n}-{rand}" for n in ["paul", "george", "john", "jessica", "john2"]]
client.update_users(users={uid: UserRequest(id=uid, name=uid) for uid in user_ids})
for uid in user_ids:
channel.update(add_members=[ChannelMemberRequest(user_id=uid)])

response = client.chat.query_members(
payload=QueryMembersPayload(
type=channel.channel_type,
id=channel.channel_id,
filter_conditions={"name": {"$autocomplete": "j"}},
sort=[SortParamRequest(field="created_at", direction=1)],
offset=1,
limit=10,
)
)
assert response.data.members is not None
assert len(response.data.members) == 2

try:
client.delete_users(
user_ids=user_ids, user="hard", conversations="hard", messages="hard"
)
except Exception:
pass


def test_mute_unmute_channel(client: Stream, channel: Channel, random_users):
"""Mute and unmute a channel."""
user_id = random_users[0].id
channel.update(add_members=[ChannelMemberRequest(user_id=user_id)])
cid = f"{channel.channel_type}:{channel.channel_id}"

response = client.chat.mute_channel(
user_id=user_id, channel_cids=[cid], expiration=30000
)
assert response.data.channel_mute is not None
assert response.data.channel_mute.expires is not None

# verify muted channel appears in query
response = client.chat.query_channels(
filter_conditions={"muted": True, "cid": cid}, user_id=user_id
)
assert len(response.data.channels) == 1

# unmute
client.chat.unmute_channel(user_id=user_id, channel_cids=[cid])
response = client.chat.query_channels(
filter_conditions={"muted": True, "cid": cid}, user_id=user_id
)
assert len(response.data.channels) == 0


def test_export_channel(client: Stream, channel: Channel, random_users):
"""Export a channel and poll the task until complete."""
channel.send_message(
message=MessageRequest(text="Hey Joni", user_id=random_users[0].id)
)
cid = f"{channel.channel_type}:{channel.channel_id}"
response = client.chat.export_channels(channels=[ChannelExport(cid=cid)])
task_id = response.data.task_id
assert task_id is not None and task_id != ""

task_response = wait_for_task(client, task_id, timeout_ms=30000)
assert task_response.data.status == "completed"


def test_update_member_partial(channel: Channel, random_users):
"""Partial update of a channel member's custom fields."""
user_id = random_users[0].id
channel.update(add_members=[ChannelMemberRequest(user_id=user_id)])

response = channel.update_member_partial(user_id=user_id, set={"hat": "blue"})
assert response.data.channel_member is not None
assert response.data.channel_member.custom.get("hat") == "blue"

response = channel.update_member_partial(
user_id=user_id, set={"color": "red"}, unset=["hat"]
)
assert response.data.channel_member.custom.get("color") == "red"
assert "hat" not in (response.data.channel_member.custom or {})


def test_query_channels(client: Stream, random_users):
"""Query channels by member filter."""
user_id = random_users[0].id
channel_id = str(uuid.uuid4())
ch = client.chat.channel("messaging", channel_id)
ch.get_or_create(
data=ChannelInput(
created_by_id=user_id,
members=[ChannelMemberRequest(user_id=user_id)],
)
)

response = client.chat.query_channels(
filter_conditions={"members": {"$in": [user_id]}}
)
assert len(response.data.channels) >= 1

try:
client.chat.delete_channels(cids=[f"messaging:{channel_id}"], hard_delete=True)
except Exception:
pass


def test_delete_channels(client: Stream, random_user):
"""Delete channels via async task and poll for completion."""
channel_id = str(uuid.uuid4())
ch = client.chat.channel("messaging", channel_id)
ch.get_or_create(data=ChannelInput(created_by_id=random_user.id))

cid = f"messaging:{channel_id}"
response = client.chat.delete_channels(cids=[cid], hard_delete=True)
assert response.data.task_id is not None


def test_filter_tags(channel: Channel, random_user):
"""Add and remove filter tags on a channel."""
response = channel.update(add_filter_tags=["vip"])
assert response.data.channel is not None

response = channel.update(remove_filter_tags=["vip"])
assert response.data.channel is not None


def test_pin_channel(client: Stream, channel: Channel, random_users):
"""Pin and unpin a channel for a user."""
user_id = random_users[0].id
channel.update(add_members=[ChannelMemberRequest(user_id=user_id)])
cid = f"{channel.channel_type}:{channel.channel_id}"

# Pin the channel
response = channel.update_member_partial(user_id=user_id, set={"pinned": True})
assert response is not None

# Query for pinned channels
response = client.chat.query_channels(
filter_conditions={"pinned": True, "cid": cid}, user_id=user_id
)
assert len(response.data.channels) == 1
assert response.data.channels[0].channel.cid == cid

# Unpin the channel
response = channel.update_member_partial(user_id=user_id, set={"pinned": False})
assert response is not None

# Query for unpinned channels
response = client.chat.query_channels(
filter_conditions={"pinned": False, "cid": cid}, user_id=user_id
)
assert len(response.data.channels) == 1


def test_archive_channel(client: Stream, channel: Channel, random_users):
"""Archive and unarchive a channel for a user."""
user_id = random_users[0].id
channel.update(add_members=[ChannelMemberRequest(user_id=user_id)])
cid = f"{channel.channel_type}:{channel.channel_id}"

# Archive the channel
response = channel.update_member_partial(user_id=user_id, set={"archived": True})
assert response is not None

# Query for archived channels
response = client.chat.query_channels(
filter_conditions={"archived": True, "cid": cid}, user_id=user_id
)
assert len(response.data.channels) == 1
assert response.data.channels[0].channel.cid == cid

# Unarchive the channel
response = channel.update_member_partial(user_id=user_id, set={"archived": False})
assert response is not None

# Query for unarchived channels
response = client.chat.query_channels(
filter_conditions={"archived": False, "cid": cid}, user_id=user_id
)
assert len(response.data.channels) == 1


def test_export_channel_status(client: Stream):
"""Test error handling for export channel status with invalid task ID."""
import pytest
from getstream.base import StreamAPIException

# Invalid task ID should raise an error
with pytest.raises(StreamAPIException):
client.get_task(id=str(uuid.uuid4()))


def test_ban_user_in_channel(
client: Stream, channel: Channel, random_user, server_user
):
"""Ban and unban a user at channel level."""
channel.update(
add_members=[
ChannelMemberRequest(user_id=uid)
for uid in [random_user.id, server_user.id]
]
)
cid = f"{channel.channel_type}:{channel.channel_id}"

client.moderation.ban(
target_user_id=random_user.id,
banned_by_id=server_user.id,
channel_cid=cid,
)
client.moderation.ban(
target_user_id=random_user.id,
banned_by_id=server_user.id,
channel_cid=cid,
timeout=3600,
reason="offensive language is not allowed here",
)
client.moderation.unban(
target_user_id=random_user.id,
channel_cid=cid,
)


def test_create_distinct_channel(client: Stream, random_users):
"""Create a distinct channel and verify idempotency."""
member_ids = [u.id for u in random_users[:2]]
members = [ChannelMemberRequest(user_id=uid) for uid in member_ids]

response = client.chat.get_or_create_distinct_channel(
type="messaging",
data=ChannelInput(created_by_id=member_ids[0], members=members),
)
assert response.data.channel is not None
first_cid = response.data.channel.cid

# calling again with same members should return same channel
response2 = client.chat.get_or_create_distinct_channel(
type="messaging",
data=ChannelInput(created_by_id=member_ids[0], members=members),
)
assert response2.data.channel.cid == first_cid

try:
client.chat.delete_channels(cids=[first_cid], hard_delete=True)
except Exception:
pass


def test_freeze_unfreeze_channel(channel: Channel):
"""Freeze and unfreeze a channel."""
response = channel.update_channel_partial(set={"frozen": True})
assert response.data.channel.frozen is True

response = channel.update_channel_partial(set={"frozen": False})
assert response.data.channel.frozen is False


def test_mark_unread_with_thread(channel: Channel, random_user):
"""Mark unread from a specific thread."""
channel.update(add_members=[ChannelMemberRequest(user_id=random_user.id)])
parent = channel.send_message(
message=MessageRequest(text="Parent for unread thread", user_id=random_user.id)
)
parent_id = parent.data.message.id

channel.send_message(
message=MessageRequest(
text="Reply in thread",
user_id=random_user.id,
parent_id=parent_id,
)
)

response = channel.mark_unread(
user_id=random_user.id,
thread_id=parent_id,
)
assert response is not None


def test_add_members_with_roles(client: Stream, channel: Channel):
"""Add members with specific channel roles."""
rand = str(uuid.uuid4())[:8]
mod_id = f"mod-{rand}"
member_id = f"member-{rand}"
user_ids = [mod_id, member_id]
client.update_users(users={uid: UserRequest(id=uid, name=uid) for uid in user_ids})

channel.update(
add_members=[
ChannelMemberRequest(user_id=mod_id, channel_role="channel_moderator"),
ChannelMemberRequest(user_id=member_id, channel_role="channel_member"),
]
)

members_resp = client.chat.query_members(
payload=QueryMembersPayload(
type=channel.channel_type,
id=channel.channel_id,
filter_conditions={"id": {"$in": user_ids}},
)
)
role_map = {m.user_id: m.channel_role for m in members_resp.data.members}
assert role_map[mod_id] == "channel_moderator"
assert role_map[member_id] == "channel_member"

try:
client.delete_users(
user_ids=user_ids, user="hard", conversations="hard", messages="hard"
)
except Exception:
pass


def test_message_count(client: Stream, channel: Channel, random_user):
"""Verify message count on a channel."""
channel.send_message(
message=MessageRequest(text="hello world", user_id=random_user.id)
)

q_resp = client.chat.query_channels(
filter_conditions={"cid": f"{channel.channel_type}:{channel.channel_id}"},
user_id=random_user.id,
)
assert len(q_resp.data.channels) == 1
ch = q_resp.data.channels[0].channel
if ch.message_count is not None:
assert ch.message_count >= 1


def test_message_count_disabled(client: Stream, channel: Channel, random_user):
"""Verify message count is None when count_messages is disabled."""
channel.update_channel_partial(set={"config_overrides": {"count_messages": False}})

channel.send_message(
message=MessageRequest(text="hello world", user_id=random_user.id)
)

q_resp = client.chat.query_channels(
filter_conditions={"cid": f"{channel.channel_type}:{channel.channel_id}"},
user_id=random_user.id,
)
assert len(q_resp.data.channels) == 1
assert q_resp.data.channels[0].channel.message_count is None


def test_mark_unread_with_timestamp(channel: Channel, random_user):
"""Mark unread using a message timestamp."""
channel.update(add_members=[ChannelMemberRequest(user_id=random_user.id)])
send_resp = channel.send_message(
message=MessageRequest(
text="test message for timestamp", user_id=random_user.id
)
)
ts = send_resp.data.message.created_at

response = channel.mark_unread(
user_id=random_user.id,
message_timestamp=ts,
)
assert response is not None


def test_upload_and_delete_file(channel: Channel, random_user):
"""Upload and delete a file."""
import os

with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f:
f.write(b"hello world test file content")
f.flush()
tmp_path = f.name

try:
upload_resp = channel.upload_channel_file(
file=tmp_path,
user=OnlyUserID(id=random_user.id),
)
assert upload_resp.data.file is not None
file_url = upload_resp.data.file
assert "http" in file_url

channel.delete_channel_file(url=file_url)
except Exception as e:
if "multipart" in str(e).lower():
import pytest

pytest.skip("File upload requires multipart/form-data support")
raise
finally:
os.unlink(tmp_path)


def test_upload_and_delete_image(channel: Channel, random_user):
"""Upload and delete an image."""
import os

with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f:
f.write(b"fake-jpg-image-data-for-testing")
f.flush()
tmp_path = f.name

try:
upload_resp = channel.upload_channel_image(
file=tmp_path,
user=OnlyUserID(id=random_user.id),
)
assert upload_resp.data.file is not None
image_url = upload_resp.data.file
assert "http" in image_url

channel.delete_channel_image(url=image_url)
except Exception as e:
if "multipart" in str(e).lower():
import pytest

pytest.skip("Image upload requires multipart/form-data support")
raise
finally:
os.unlink(tmp_path)

⚠️ Potential issue | 🟡 Minor

Group related channel tests into test classes.

This module is fully function-based; please organize related tests into class groups for maintainability and consistency with repo test structure.

As per coding guidelines: “Keep tests well organized and use test classes to group similar tests”.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_chat_channel.py` around lines 20 - 685, The file contains many
standalone test functions that should be grouped into pytest test classes for
organization; create logical test classes (e.g., TestChannelLifecycle,
TestChannelMembers, TestChannelModeration, TestChannelVisibilityAndPinning,
TestChannelExportsAndTasks, TestUploadsAndFiles, TestMessageCountAndUnread) and
move the related functions into those classes as methods (preserve each function
name starting with test_ and keep the same fixture arguments), e.g., put
test_create_channel, test_create_channel_with_options, test_update_channel,
test_delete_channel, test_truncate_channel, test_truncate_channel_with_options,
test_freeze_unfreeze_channel into TestChannelLifecycle; group member-related
tests like test_add_members, test_add_members_hide_history, test_invite_members,
test_invites_accept_reject, test_query_members, test_add_moderators,
test_assign_roles, test_update_member_partial, test_add_members_with_roles into
TestChannelMembers; group visibility/pinning/muting/archive tests like
test_channel_hide_show, test_mute_unmute_channel, test_pin_channel,
test_archive_channel into TestChannelVisibilityAndPinning; place moderation and
deletion/existing tasks like test_ban_user_in_channel, test_delete_channels,
test_export_channel, test_export_channel_status into
TestChannelModerationAndTasks; group uploads into TestUploadsAndFiles
(test_upload_and_delete_file/image); and group message count/unread/thread tests
into TestMessageCountAndUnread; ensure you only wrap functions into classes (no
logic changes) so fixtures still inject correctly.

Comment on lines +33 to +40
# cleanup
try:
client.chat.delete_channels(
cids=[f"{response.data.channel.type}:{response.data.channel.id}"],
hard_delete=True,
)
except Exception:
pass

⚠️ Potential issue | 🟠 Major

Avoid except Exception: pass in cleanup paths.

These blocks currently hide cleanup/API regressions and can leak test resources, which hurts suite reliability. Catch only expected cleanup failures (or fail the test on unexpected exceptions).

Also applies to: 56-62, 260-263, 287-292, 368-371, 509-512, 573-578

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_chat_channel.py` around lines 33 - 40, The cleanup blocks swallow
all exceptions (try: client.chat.delete_channels(...) except Exception: pass),
which hides API failures and leaks resources; change each of these to catch only
the expected API/HTTP errors (e.g., the SDK's
NotFound/ResourceAlreadyDeleted/ApiError class) and handle them (log a warning)
while letting unexpected exceptions propagate (re-raise) so tests fail; replace
the bare except in the client.chat.delete_channels calls with a specific except
<ExpectedSDKException> as e: log/record the cleanup failure, and add a final
generic except Exception: raise to avoid hiding regressions—apply this change to
every occurrence of client.chat.delete_channels in this file (the blocks around
the reported ranges).
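A sketch of the narrowed cleanup pattern; `StreamAPIException` here is a local stand-in for `getstream.base.StreamAPIException` (which this test file already imports elsewhere), so the example stays self-contained:

```python
import logging

logger = logging.getLogger(__name__)


class StreamAPIException(Exception):
    """Stand-in for getstream.base.StreamAPIException in this sketch."""


def cleanup_channel(client, cid):
    """Best-effort cleanup: tolerate expected API errors, let bugs propagate."""
    try:
        client.chat.delete_channels(cids=[cid], hard_delete=True)
    except StreamAPIException as e:
        # expected cleanup failures (e.g. the channel was already hard-deleted)
        logger.warning("cleanup of %s failed: %s", cid, e)
    # any other exception propagates, so regressions still fail the test
```

Compared with a bare `except Exception: pass`, an unexpected `TypeError` or connection bug now surfaces instead of silently leaking test resources.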

Comment on lines +374 to +383
def test_delete_channels(client: Stream, random_user):
"""Delete channels via async task and poll for completion."""
channel_id = str(uuid.uuid4())
ch = client.chat.channel("messaging", channel_id)
ch.get_or_create(data=ChannelInput(created_by_id=random_user.id))

cid = f"messaging:{channel_id}"
response = client.chat.delete_channels(cids=[cid], hard_delete=True)
assert response.data.task_id is not None


⚠️ Potential issue | 🟡 Minor

Docstring and behavior diverged in test_delete_channels.

Line 375 says the test polls for completion, but the body now only verifies task_id existence. Please update the docstring to reflect actual behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_chat_channel.py` around lines 374 - 383, The docstring for
test_delete_channels is misleading: it claims the test "polls for completion"
but the test only creates a channel and asserts a task_id from
client.chat.delete_channels; update the docstring of test_delete_channels to
describe the actual behavior (create a channel and verify the delete_channels
response contains a task_id) so it accurately reflects the assertions involving
response.data.task_id and the call to client.chat.delete_channels(cids=[cid],
hard_delete=True).

Comment on lines +634 to +637
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f:
f.write(b"hello world test file content")
f.flush()
tmp_path = f.name

⚠️ Potential issue | 🟡 Minor

Use tests/assets/ files for upload tests instead of generating temp files.

These tests currently create ad-hoc files at runtime. Please switch to reusable assets under tests/assets/ (within size limits) per repository test conventions.

As per coding guidelines: “Tests that rely on file assets should use files from tests/assets/ folder; add new files there and reuse existing ones when possible. Do not use files larger than 256 kilobytes”.

Also applies to: 663-666

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_chat_channel.py` around lines 634 - 637, The test creates temp
files with tempfile.NamedTemporaryFile (producing tmp_path) for uploading;
change it to use a small reusable fixture file from tests/assets/ instead: add
or reuse an asset under tests/assets/ (<=256KB), replace the
tempfile.NamedTemporaryFile block and references to tmp_path in the test (and
the similar block at lines 663-666) to open that asset (e.g., via
open("tests/assets/your_asset.txt","rb")) so the test reads the existing asset
rather than generating a temp file at runtime.
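A sketch of the asset-based pattern the guideline describes (the `tests/assets/` location and the 256 KB cap come from the guideline quoted above; the helper name is hypothetical):

```python
from pathlib import Path

MAX_ASSET_BYTES = 256 * 1024  # repo guideline: test assets must stay under 256 KB


def load_asset(name, assets_dir="tests/assets"):
    """Read a reusable test asset, enforcing the repo size limit.

    Returns the asset path and its bytes so tests can pass either to the
    upload API instead of generating a temp file at runtime.
    """
    path = Path(assets_dir) / name
    data = path.read_bytes()
    assert len(data) <= MAX_ASSET_BYTES, f"{name} exceeds the 256 KB asset limit"
    return path, data
```

The upload tests would then call something like `channel.upload_channel_file(file=str(path), ...)` with the checked-in asset, keeping the test input stable across runs.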

Daksh and others added 5 commits March 2, 2026 14:15
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Narrow `except Exception` to `except StreamAPIException` in cleanup blocks
- Fix stale docstring on test_delete_channels
- Replace runtime tempfile creation with static test assets
- Group 36 test functions into 5 logical classes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…misc

- Remove duplicate ChannelMemberRequest import in test_chat_message.py
- Restore team channel type commands after mutation in test_update_channel_type
- Replace fixed time.sleep with bounded polling in test_permissions_roles

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
GetChannelTypeResponse.commands returns List[Command] objects, but
update_channel_type expects List[str]. Extract .name from each command.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add draft tests (create/get/delete/thread/query), enhance channel tests
(members $in query, filter tags, hide/show hidden filter, invite error
handling), enhance message tests (replies pagination, reactions offset),
and add user custom field filter+sort test.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Daksh and others added 3 commits March 3, 2026 11:08
Add draft tests (create/get/delete/thread/query), enhance channel tests
(members $in query, filter tags, hide/show hidden filter, invite error
for non-member), enhance message tests (replies pagination with limit,
reactions offset), and add user custom field filter+sort test.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Previously wait_for_task silently returned on "failed" status, treating
it the same as "completed". Now it raises RuntimeError so callers don't
accidentally accept failed tasks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Matches stream-chat-python which skips the equivalent test. The test
leaks custom roles on failure, hitting the 25-role app limit.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

Add getstream/video to the video test step so doctests inside that
directory are collected again.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -73,6 +73,9 @@ jobs:

I suggest we make it clear from reading each step exactly which credentials it runs against, and remove the job-level defaults in favor of explicit step-level env blocks:

      - name: Run non-video tests
        env:
          STREAM_API_KEY: ${{ vars.STREAM_CHAT_API_KEY }}
          STREAM_API_SECRET: ${{ secrets.STREAM_CHAT_API_SECRET }}
          STREAM_BASE_URL: ${{ vars.STREAM_CHAT_BASE_URL }}
        run: |
          uv run pytest -m "${{ inputs.marker }}" tests/ getstream/ \
...
      - name: Run video tests
        env:
          STREAM_API_KEY: ${{ vars.STREAM_API_KEY }}
          STREAM_API_SECRET: ${{ secrets.STREAM_API_SECRET }}
          STREAM_BASE_URL: ${{ vars.STREAM_BASE_URL }}
        run: |
          uv run pytest -m "${{ inputs.marker }}" \
...

Author

The step-level env override is needed because the SDK's Settings class reads STREAM_API_KEY via pydantic BaseSettings; both chat and video tests use the same env var name, just with different credentials. Remapping at the step level is the cleanest way to handle this without modifying the auto-generated SDK or adding env-switching logic in the test fixtures.

I've removed the job level env block and added explicit env to both steps so it's clear which credentials each step uses.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>