Running the test tests/functional/ui/test_odf_storage_consumption_trend.py::TestConsumptionTrendUI::test_consumption_trend_with_prometheus_failures on the s390x arch produces a failure:
20:35:43 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - UI logs directory class /opt/clusters/m1322001/ocs-ci-logs/ui_logs_dir_1740598417
20:35:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get clusterversion -n openshift-storage -o yaml
20:35:45 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - page loaded: https://console-openshift-console.apps.m1322001.lnxero1.boe/odf/system/ns/openshift-storage/ocs.openshift.io~v1~storagecluster/ocs-storagecluster-storagesystem/overview/block-file
20:35:45 - MainThread - tests.functional.ui.test_odf_storage_consumption_trend - INFO - Get the value of 'Estimated days until full' from UI
20:35:45 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
20:35:45 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
20:40:45 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
20:40:45 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
20:45:45 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
20:45:45 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
20:50:45 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
20:50:45 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
20:55:45 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
20:55:45 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
21:00:46 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
21:00:46 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
21:05:46 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
21:05:46 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
21:10:46 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
21:10:46 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
21:15:46 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
21:15:46 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
21:20:46 - MainThread - ocs_ci.ocs.ui.helpers_ui - WARNING - Dashboard is not yet ready yet after osd resize
21:20:46 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 300 seconds before next iteration
21:25:47 - MainThread - ocs_ci.framework.pytest_customization.ocscilib - INFO - Adjusted timeout for MG is 3600 seconds
21:25:47 - ThreadPoolExecutor-0_0 - ocs_ci.ocs.utils - INFO - RUNNING IN CTX: m1322001 RUNID: = 1740598417
21:25:47 - ThreadPoolExecutor-0_0 - ocs_ci.ocs.utils - INFO - Must gather image: quay.io/rhceph-dev/ocs-must-gather:latest-4.18 will be used.
21:25:47 - ThreadPoolExecutor-0_0 - ocs_ci.ocs.utils - INFO - OCS logs will be placed in location /opt/clusters/m1322001/ocs-ci-logs/failed_testcase_ocs_logs_1740598417/test_consumption_trend_with_prometheus_failures_ocs_logs/m1322001/ocs_must_gather
21:25:47 - ThreadPoolExecutor-0_0 - ocs_ci.ocs.utils - INFO - Must gather std error log will be placed in: /opt/clusters/m1322001/ocs-ci-logs/failed_testcase_ocs_logs_1740598417/test_consumption_trend_with_prometheus_failures_ocs_logs/m1322001/ocs_must_gather/mg_output_1740601547.32652.log
21:25:47 - ThreadPoolExecutor-0_0 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /opt/clusters/m1322001/ocp4-workdir/auth/kubeconfig adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.18 --dest-dir=/opt/clusters/m1322001/ocs-ci-logs/failed_testcase_ocs_logs_1740598417/test_consumption_trend_with_prometheus_failures_ocs_logs/m132
21:30:01 - MainThread - ocs_ci.ocs.utils - INFO - None
21:30:01 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - duration reported by tests/functional/ui/test_odf_storage_consumption_trend.py::TestConsumptionTrendUI::test_consumption_trend_with_prometheus_failures immediately after test execution: 3040.42
FAILED
____ TestConsumptionTrendUI.test_consumption_trend_with_prometheus_failures ____
self = <tests.functional.ui.test_odf_storage_consumption_trend.TestConsumptionTrendUI object at 0x7ff42ed14a90>
setup_ui_class = <selenium.webdriver.chrome.webdriver.WebDriver (session="4bbb41e450eba6f0bb82caca42dca6ac")>
@polarion_id("OCS-6262")
def test_consumption_trend_with_prometheus_failures(self, setup_ui_class):
"""
Fail prometheus and verify the Consumption trend in the ODF dashboard to make sure
‘Estimated days until full’ and 'Average' reflects accurate value.
1. Fail the prometheus pods by deleting them. New pods will be created automatically.
2. When the new prometheus pods up, Consumption trend should be displayed in the dashboard.
3. Check the values for ‘Estimated days until full’ and ‘Average of storage consumption per day’
4. Should show close to the values before deleting the prometheus pod
"""
block_and_file_page = (
PageNavigator()
.nav_odf_default_page()
.nav_storage_systems_tab()
.nav_storagecluster_storagesystem_details()
.nav_block_and_file()
)
logger.info("Get the value of 'Estimated days until full' from UI")
> est_days_before = block_and_file_page.get_est_days_from_ui()
tests/functional/ui/test_odf_storage_consumption_trend.py:200:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ocs_ci/ocs/ui/page_objects/block_and_file.py:243: in get_est_days_from_ui
collected_tpl_of_days_and_avg = self.odf_storagesystems_consumption_trend()
ocs_ci/ocs/ui/page_objects/block_and_file.py:218: in odf_storagesystems_consumption_trend
for tpl_of_days_and_avg in TimeoutSampler(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ocs_ci.utility.utils.TimeoutSampler object at 0x7ff42fc6ccd0>
def __iter__(self):
if self.start_time is None:
self.start_time = time.time()
while True:
self.last_sample_time = time.time()
if self.timeout <= (self.last_sample_time - self.start_time):
> raise self.timeout_exc_cls(*self.timeout_exc_args)
E ocs_ci.ocs.exceptions.TimeoutExpiredError: Timed out after 3000s running get_estimated_days_from_consumption_trend()
ocs_ci/utility/utils.py:1491: TimeoutExpiredError
------------------------------ live log teardown -------------------------------
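For context on the failure mode: the traceback shows that ocs_ci.utility.utils.TimeoutSampler polls get_estimated_days_from_consumption_trend() every 300 seconds and, once the 3000 s budget is exhausted without a usable sample, raises TimeoutExpiredError. A minimal sketch of that polling pattern (simplified and hypothetical — timeout_sampler is an illustrative stand-in, not the actual ocs-ci implementation) looks like this:

```python
import time


class TimeoutExpiredError(Exception):
    """Raised when the polled function never succeeds within the timeout."""


def timeout_sampler(timeout, sleep, func, *args):
    """Yield func(*args) repeatedly until `timeout` seconds elapse.

    Mirrors the loop visible in the traceback: record a start time,
    check elapsed time before each sample, and raise once the budget
    is spent. (Hypothetical helper for illustration only.)
    """
    start = time.time()
    while True:
        if time.time() - start >= timeout:
            raise TimeoutExpiredError(
                f"Timed out after {timeout}s running {func.__name__}()"
            )
        yield func(*args)
        time.sleep(sleep)
```

In the run above, every 300 s sample hit the "Dashboard is not yet ready" path, so the sampler never produced the 'Estimated days until full' value before the 3000 s timeout, which is why the test fails with TimeoutExpiredError rather than an assertion error.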