k-Wave User Forum » Topic: Precission error at 1024x1024
http://www.k-wave.org/forum/topic/precission-error-at-1024x1024
Support for the k-Wave MATLAB toolbox (feed retrieved Fri, 04 Dec 2020 19:56:14 +0000)
bISHOP on "Precission error at 1024x1024"
http://www.k-wave.org/forum/topic/precission-error-at-1024x1024#post-32
Wed, 21 Jul 2010 10:27:21 +0000<p>I've replicated the error and it is a memory error, as you say. The interpolation is set to nearest, as you suggest, so I think my results will be fair enough to demonstrate that we should acquire a Tesla or similar. I'm going to install the latest MATLAB on a desktop machine (quad core, 4 GB RAM, and a Zotac GT240) and repeat the measurements to compare GPUs and multi-threading.</p>
<p>Thanks again.</p>
<p>Daniel.
</p>Bradley Treeby on "Precission error at 1024x1024"
http://www.k-wave.org/forum/topic/precission-error-at-1024x1024#post-30
Wed, 21 Jul 2010 09:47:46 +0000<p>Hi Daniel,</p>
<p>Unfortunately I have not been able to replicate this error; however, it is likely your PC is running out of memory when it tries to compute the Delaunay triangulation on a matrix of this size. This triangulation is computed once in advance and then used throughout the simulation to calculate the pressure at Cartesian sensor coordinates that do not lie exactly on the rectangular grid, using linear interpolation. You could try calling the MATLAB delaunay function outside of k-Wave to see if you get the same problem with a 1024x1024 matrix.</p>
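<p>To test the triangulation step in isolation, something along these lines should trigger the same call that fails inside <code>gridDataFast</code> (this is a sketch: the 0.1 mm grid spacing is taken from the simulation log above, and the variable names are assumed):</p>
<pre><code>% Standalone test of the Delaunay triangulation on a 1024x1024 grid
N  = 1024;
dx = 0.1e-3;                        % 0.1 mm spacing, as in the simulation log
v  = ((-N/2):(N/2 - 1)).' * dx;     % grid coordinate vector
[x, y] = ndgrid(v, v);              % full 2D grid (1024^2, about 1e6 points)
tri = delaunayn([x(:) y(:)]);       % the call that fails in gridDataFast</code></pre>
<p>If this fails in the same way outside k-Wave, the problem is in MATLAB/Qhull rather than in the toolbox.</p>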
<p>When running simulations on the GPU, if you have <code>'CartInterp'</code> set to <code>'nearest'</code>, the Delaunay triangulation is not calculated, so you wouldn't see the error.</p>
<p>To make the comparison between GPU and CPU fair, I would suggest using the options <code>'CartInterp', 'nearest', 'DataCast', 'single'</code> for the simulations run on the CPU. You would also see a slight speed-up when using the CPU if you used a later version of MATLAB that supports multi-threading.</p>
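<p>For reference, a minimal sketch of passing those optional inputs for the CPU runs (variable names as in the standard k-Wave examples):</p>
<pre><code>% Fair CPU comparison: nearest-neighbour interpolation, single precision
input_args  = {'CartInterp', 'nearest', 'DataCast', 'single'};
sensor_data = kspaceFirstOrder2D(kgrid, medium, source, sensor, input_args{:});</code></pre>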
<p>The GPU speed is also quite dependent on the card. For example, I can run a simulation on a 128x128x128 grid with 1000 time steps in around 1 minute 30 seconds on a Tesla T10 and 6 minutes on a Quadro FX 3700. This is compared to about 18 minutes 30 seconds on a fast quad-core PC.
</p>bISHOP on "Precission error at 1024x1024"
http://www.k-wave.org/forum/topic/precission-error-at-1024x1024#post-28
Wed, 21 Jul 2010 09:16:33 +0000<p>Hi again.</p>
<p>I'm testing GPU simulations against CPU ones. I've modified only the size of the k-space grid, keeping the same configuration in all tests. I've tried 128, 256, 512 and 1024, along with other sizes to test non-power-of-two performance. I'm doing this to show my boss that we should buy some CUDA-compatible hardware, so it is just indicative.</p>
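<p>A loop along these lines can drive such a size sweep (a sketch only: the variable names are assumed, and <code>medium</code>, <code>source</code> and <code>sensor</code> would need to be rebuilt for each grid size in a real script):</p>
<pre><code>% Benchmark kspaceFirstOrder2D over a range of grid sizes
dx = 0.1e-3;                                 % 0.1 mm grid spacing
for N = [128 256 512 1024]
    kgrid = makeGrid(N, dx, N, dx);          % k-Wave grid constructor
    tic;
    sensor_data = kspaceFirstOrder2D(kgrid, medium, source, sensor, input_args{:});
    fprintf('N = %d: %.1f s\n', N, toc);
end</code></pre>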
<p>The GPU simulations at 128, 256, 512 and 1024 all ran correctly, but on the CPU MATLAB crashes at 1024 with this message:</p>
<pre><code>Running k-space simulation...
WARNING: visualisation plot scale may not be optimal for given source
dt: 20ns, t_end: 96.54us, time steps: 4828
input grid size: 1024 by 1024 pixels (102.4 by 102.4mm)
maximum supported frequency: 7.5MHz
smoothing p0 distribution...
smoothing density distribution...
calculating Delaunay triangulation...
??? Error using ==> qhullmx
While executing: | qhull d Qt Qbb Qc
Options selected for Qhull 2003.1 2003/12/30:
delaunay Qtriangulate Qbbound-last Qcoplanar-keep _pre-merge
_zero-centrum Pgood Qinterior-keep _max-width 0.1 Error-roundoff 1.4e-016
_one-merge 9.9e-016 Visible-distance 2.8e-016 U-coplanar-distance 2.8e-016
Width-outside 5.7e-016 _wide-facet 1.7e-015
Last point added to hull was p276113.
Last merge was #1742388.
Qhull has finished constructing the hull.
At error exit:
precision problems (corrected unless 'Q0' or an error)
1742388
coplanar horizon facets for new vertices
1
degenerate hyperplanes recomputed with gaussian elimination
1
nearly singular or axis-parallel hyperplanes
Error in ==> delaunayn at 117
t = qhullmx(x', 'd ', opt);
Error in ==> gridDataFast at 55
tri = delaunayn([x y]);
Error in ==> kspaceFirstOrder2D at 952
[zi del_tri] = gridDataFast(kgrid.z, kgrid.x, p, sensor_z, sensor_x);
Error in ==> ej211 at 71
sensor_data = kspaceFirstOrder2D(kgrid, medium, source, sensor, input_args{:});</code></pre>
<p>I have no idea where the error comes from. I find it interesting because the GPU handles this size fine; I expected the CPU simulation to take ages, but not to crash.</p>
<p>Thank you in advance.</p>
<p>Daniel.
</p>