A quantization method for a plurality of partial sums of a convolutional neural network based on computing-in-memory hardware includes a probability-based quantizing step and a margin-based quantizing step. The probability-based quantizing step includes a network training step, a quantization-level generating step, a partial-sum quantizing step, a first network retraining step and a first accuracy generating step. The margin-based quantizing step includes a quantization edge changing step, a second network retraining step and a second accuracy generating step. The quantization edge changing step includes changing a quantization edge of at least one of a plurality of quantization levels. The probability-based quantizing step is performed to generate a first accuracy value, and the margin-based quantizing step is performed to generate a second accuracy value. The second accuracy value is greater than the first accuracy value.
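
The following is a minimal, illustrative Python/NumPy sketch of the two-phase flow described above. It rests on several assumptions that are not specified by the method itself: the partial sums are modeled as a one-dimensional array of synthetic values, the quantization levels are derived from equal-probability (quantile) bins as one possible reading of the quantization-level generating step, and the network retraining and accuracy generating steps are replaced by a placeholder proxy (inverse quantization error). All function names (probability_based_levels, quantize_partial_sums, retrain_and_evaluate) are hypothetical.

    import numpy as np

    def probability_based_levels(partial_sums, n_levels):
        # One possible reading of the quantization-level generating step:
        # place the quantization edges at equal-probability quantiles of the
        # partial-sum distribution and use bin midpoints as level values.
        quantiles = np.linspace(0.0, 1.0, n_levels + 1)
        edges = np.quantile(partial_sums, quantiles)
        levels = 0.5 * (edges[:-1] + edges[1:])
        return edges, levels

    def quantize_partial_sums(partial_sums, edges, levels):
        # Partial-sum quantizing step: map each partial sum to the level
        # whose edges enclose it.
        idx = np.searchsorted(edges, partial_sums, side="right") - 1
        idx = np.clip(idx, 0, len(levels) - 1)
        return levels[idx]

    def retrain_and_evaluate(partial_sums, quantized):
        # Placeholder for the retraining and accuracy generating steps; a
        # real flow would fine-tune the network with quantized partial sums
        # and report validation accuracy.  Here an inverse quantization
        # error stands in as a synthetic accuracy proxy.
        mse = np.mean((partial_sums - quantized) ** 2)
        return 1.0 / (1.0 + mse)

    # --- Probability-based quantizing step ---------------------------------
    rng = np.random.default_rng(0)
    partial_sums = rng.normal(loc=0.0, scale=4.0, size=10_000)  # stand-in data
    edges, levels = probability_based_levels(partial_sums, n_levels=8)
    first_q = quantize_partial_sums(partial_sums, edges, levels)
    first_accuracy = retrain_and_evaluate(partial_sums, first_q)

    # --- Margin-based quantizing step ---------------------------------------
    # Change one quantization edge by a small margin, re-quantize, re-evaluate,
    # and keep the changed edge only when the proxy accuracy improves.
    second_accuracy = first_accuracy
    for margin in (-0.5, 0.5):
        trial_edges = edges.copy()
        trial_edges[len(trial_edges) // 2] += margin  # quantization edge changing step
        trial_q = quantize_partial_sums(partial_sums, trial_edges, levels)
        accuracy = retrain_and_evaluate(partial_sums, trial_q)
        second_accuracy = max(second_accuracy, accuracy)

    print(f"first accuracy proxy:  {first_accuracy:.4f}")
    print(f"second accuracy proxy: {second_accuracy:.4f}")

The margin loop retains a changed quantization edge only when the proxy accuracy improves, which mirrors the stated relation that the second accuracy value exceeds the first; in the actual method, the improvement would be measured after retraining the network rather than by a quantization-error proxy.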