test_change_kernel and test_qsvr fail in clean stable 0.7 environment #726
I can confirm this is happening; we are looking into resolving these test failures.
@adekusar-drl @woodsp-ibm have you had any experience with these test failures? I ran these on stable 0.7 and they still failed.
@oscar-wallis In CI these pass on Ubuntu Linux, macOS and Windows, and they pass for me locally on a different Linux; I have never been able to reproduce these. It seems you have, though, so maybe the test needs to be relaxed a bit: rounded to 3 decimal places the expected value would be 0.384, which seems like it would span all cases. I am not sure why there is a difference; maybe some precision difference in a native library that gets used.
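For illustration, a minimal sketch of the proposed relaxation (the `local_score` value below is hypothetical, not taken from any CI log): at 4 decimal places two slightly different platform results are distinguishable, but both round to 0.384 at 3 places.

```python
import unittest


class TestRelaxedTolerance(unittest.TestCase):
    """Sketch of the proposed relaxation; local_score is illustrative."""

    def test_places_3_spans_platforms(self):
        ci_score = 0.38359     # value the current test asserts with places=4
        local_score = 0.38421  # hypothetical result from a differing platform

        # At 4 decimal places the two results are distinguishable...
        self.assertNotEqual(round(ci_score - local_score, 4), 0)
        # ...but both round to 0.384, so a places=3 check passes in both cases.
        self.assertAlmostEqual(ci_score, 0.384, places=3)
        self.assertAlmostEqual(local_score, 0.384, places=3)


if __name__ == "__main__":
    unittest.main()
```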
@woodsp-ibm that's strange, I am running these tests on an M2 Mac and getting these failures. If we relax the tests, don't we risk things passing the tests that really shouldn't?
GitHub Actions only recently introduced M1 runners, and we do not have any action that tests there yet, let alone M2.

Yes, relaxing the test condition could be a risk, but here the exact same code on a slightly different platform seems to be failing the test while presumably still working, unless you can see otherwise. If the code is assumed to be working (it seems to be for you, aside from the small difference), then either we loosen the test a little to accommodate the variance across platforms, or we somehow make the expected result platform dependent. I have no idea at present exactly what characteristics cause the different result, but your test on M2 produced the same values as the original post, which was done on Arch Linux that could be using an M1/M2 chip too. We had something similar in optimization, where a test was failing for someone locally who was using a Mac M1. It was observed at the time that it might be nice to have that tested by CI; as M1 runners have just become available I created an issue there: qiskit-community/qiskit-optimization#593. In searching I can find posts related to this kind of precision difference, such as https://stackoverflow.com/questions/71441137/np-float32-floating-point-differences-between-intel-macbook-and-m1
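As a concrete illustration of the kind of platform-dependent float32 behaviour discussed in that Stack Overflow post (this probe is mine, not from the thread): the same reduction computed through the platform BLAS and through a fixed-order Python accumulation can disagree in the last digits, and the BLAS result itself can vary between builds.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
a = rng.random((256, 256), dtype=np.float32)

# np.matmul dispatches to the platform's BLAS; the generator expression
# accumulates row by row in a fixed order with no SIMD blocking.
blas_trace = float(np.trace(a @ a.T))
manual_trace = float(sum((row * row).sum() for row in a))

print(f"BLAS:   {blas_trace!r}")
print(f"manual: {manual_trace!r}")
print(f"diff:   {blas_trace - manual_trace!r}")
```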
No, I'm not running on an M1/M2 chip. Arch Linux only supports x86-64 at the moment. Before opening the issue I ran the tests on two different devices, one with an Intel i5-1135G7 and a second one with an AMD Ryzen 9 7900X3D. Both fail the exact same tests I reported.
Also note that one of the tests is not failing because of a precision issue at all; the process crashes completely.
@iyanmv Thanks for the info. I guess there is some other aspect of the environment then; here CI runs and passes these tests on the latest versions of the Ubuntu, macOS and Windows VMs provisioned via GitHub Actions, across a range of Python versions (these latest versions change over time). The tests pass here, and for others locally. The two tests that differ are the QSVR ones, where QSVR is just a simple subclass of scikit-learn's SVR taking the kernel that is built out. Perhaps the kernel is different, or there is some difference in scikit-learn. As @oscar-wallis can reproduce this, maybe we can investigate that aspect further. As to the system crash, yours is the first report I have ever seen in this regard.
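To make the "thin subclass" point concrete, here is a minimal sketch (my own, not the library's code) of the pattern being described: scikit-learn's `SVR` accepts a callable kernel, and QSVR essentially plugs the quantum kernel's evaluation into that slot. The `toy_kernel` below is a classical stand-in for `FidelityQuantumKernel.evaluate`.

```python
import numpy as np
from sklearn.svm import SVR


def toy_kernel(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Classical stand-in for a quantum kernel: returns the Gram matrix
    between the rows of x and the rows of y (here a Gaussian kernel)."""
    sq_dists = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** 2
    return np.exp(-0.5 * sq_dists)


rng = np.random.default_rng(0)
x_train, y_train = rng.random((20, 2)), rng.random(20)
x_test, y_test = rng.random((5, 2)), rng.random(5)

svr = SVR(kernel=toy_kernel)  # SVR calls toy_kernel(X, Y) to build Gram matrices
svr.fit(x_train, y_train)
print(svr.score(x_test, y_test))  # R^2 score, the quantity the failing tests assert on
```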
I will try to investigate the crash issue a little more in the next few days and also check the CI pipeline you are using. I will comment if I figure something out.
**Investigation Update**

Hi @iyanmv, @woodsp-ibm and I went and tested the failing tests on my device, where I can replicate the failures. We found we could pass the tests if we set `enforce_psd=False`:

```python
def test_qsvr(self):
    """Test QSVR"""
    qkernel = FidelityQuantumKernel(feature_map=self.feature_map, enforce_psd=False)
    qsvr = QSVR(quantum_kernel=qkernel)
    qsvr.fit(self.sample_train, self.label_train)
    score = qsvr.score(self.sample_test, self.label_test)
    self.assertAlmostEqual(score, 0.38359, places=4)
```
and similarly:

```python
def test_change_kernel(self):
    """Test QSVR with QuantumKernel later"""
    qkernel = FidelityQuantumKernel(feature_map=self.feature_map, enforce_psd=False)
    qsvr = QSVR()
    qsvr.quantum_kernel = qkernel
    qsvr.fit(self.sample_train, self.label_train)
    score = qsvr.score(self.sample_test, self.label_test)
    self.assertAlmostEqual(score, 0.38359, places=4)
```
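For context, a sketch (mine, not the library's code) of the kind of projection an `enforce_psd=True` option typically performs: the sampled kernel matrix is replaced by its nearest positive semidefinite counterpart by clipping negative eigenvalues. Eigendecompositions run through the platform's LAPACK build, which is exactly the sort of native-library precision difference speculated about above.

```python
import numpy as np


def make_psd_sketch(kernel_matrix: np.ndarray) -> np.ndarray:
    """Project a symmetric matrix onto the PSD cone by clipping negative
    eigenvalues (illustrative; the library's implementation may differ)."""
    eigvals, eigvecs = np.linalg.eigh(kernel_matrix)
    eigvals = np.clip(eigvals, 0, None)  # drop the negative part of the spectrum
    return (eigvecs * eigvals) @ eigvecs.T


# Shot noise can push eigenvalues of a sampled kernel matrix negative:
noisy = np.array([[1.0, 0.9], [0.9, 0.7]])
print(np.linalg.eigvalsh(noisy))                   # one eigenvalue is negative
print(np.linalg.eigvalsh(make_psd_sketch(noisy)))  # all eigenvalues >= 0
```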
There is a follow-on question of why `enforce_psd` makes the difference here. The tests also pass with `enforce_psd=True` when the fidelity is computed exactly, i.e. without shot sampling:

```python
def test_qsvr(self):
    """Test QSVR"""
    from qiskit.algorithms.state_fidelities import ComputeUncompute
    from qiskit_aer.primitives import Sampler

    qkernel = FidelityQuantumKernel(
        fidelity=ComputeUncompute(sampler=Sampler(run_options={"shots": None})),
        feature_map=self.feature_map,
        enforce_psd=True,
    )
    qsvr = QSVR(quantum_kernel=qkernel)
    qsvr.fit(self.sample_train, self.label_train)
    score = qsvr.score(self.sample_test, self.label_test)
    self.assertAlmostEqual(score, 0.38359, places=4)
```

This points to the combination of shot sampling and the PSD enforcement of the kernel matrix as the source of the platform-dependent variance.

**Action Taken**
P.S. I really liked your issue format and copied it here; I'll probably continue to use it, so thanks!
We haven't been able to replicate the crash though, so I don't have anything to add to that, unfortunately.
@oscar-wallis Thanks for the detailed analysis! I still haven't had time to look into this in more detail. I think I will wait for the next release and run the tests again with qiskit 1.0.1.
@iyanmv If you were using Qiskit 1.0 for these tests when you were getting both the test failures I was experiencing and the additional test crash, the issue could be with how you have installed Qiskit 1.0. As mentioned in the Qiskit 1.0 release notes, you can't simply upgrade an existing 0.x environment in place; Qiskit 1.0 needs to be installed into a fresh environment.
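One quick way to spot a broken in-place upgrade (a hypothetical check of mine, not from the release notes): in a correctly installed Qiskit 1.0 environment, `qiskit.__version__` reports 1.x and the old `qiskit-terra` 0.x distribution is absent.

```python
import qiskit
from importlib.metadata import distributions

print("qiskit version:", qiskit.__version__)  # should report 1.x

# A leftover qiskit-terra distribution indicates an in-place upgrade
# from 0.x rather than a clean Qiskit 1.0 install.
names = {dist.metadata["Name"].lower() for dist in distributions() if dist.metadata["Name"]}
print("qiskit-terra present:", "qiskit-terra" in names)
```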
I build each qiskit package independently in a clean isolated environment, but I do not use `pip`.
All tests pass with 0.7.2 and Qiskit 1.1.0rc1.
**Environment**

**What is happening?**
I'm trying to improve the PKGBUILD for AUR and to run the Python tests in the `check()` function. The following tests fail in a clean chroot environment (a reproduction sketch follows the list):

- `test/algorithms/regressors/test_qsvr.py::TestQSVR::test_change_kernel`
- `test/algorithms/regressors/test_qsvr.py::TestQSVR::test_qsvr`
- `test/algorithms/classifiers/test_fidelity_quantum_kernel_pegasos_qsvc.py::TestPegasosQSVC::test_save_load` (core dumped)
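A sketch of how the failing tests can be invoked directly from Python (assumes a checkout of the qiskit-machine-learning repository with its test dependencies installed):

```python
import pytest

# Equivalent to running pytest on the three failing test IDs from the shell.
pytest.main([
    "test/algorithms/regressors/test_qsvr.py::TestQSVR::test_change_kernel",
    "test/algorithms/regressors/test_qsvr.py::TestQSVR::test_qsvr",
    "test/algorithms/classifiers/test_fidelity_quantum_kernel_pegasos_qsvc.py"
    "::TestPegasosQSVC::test_save_load",
])
```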