<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="bbPress/1.0.2" -->
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<title>k-Wave User Forum &#187; Topic: Large-scale-simulation On 8 A100 GPUS</title>
		<link>http://www.k-wave.org/forum/topic/large-scale-simulation-on-8-a100-gpus</link>
		<description>Support for the k-Wave MATLAB toolbox</description>
		<language>en-US</language>
		<pubDate>Tue, 12 May 2026 22:25:04 +0000</pubDate>
		<generator>http://bbpress.org/?v=1.0.2</generator>
		<textInput>
			<title><![CDATA[Search]]></title>
			<description><![CDATA[Search all topics from these forums.]]></description>
			<name>q</name>
			<link>http://www.k-wave.org/forum/search.php</link>
		</textInput>
		<atom:link href="http://www.k-wave.org/forum/rss/topic/large-scale-simulation-on-8-a100-gpus" rel="self" type="application/rss+xml" />

		<item>
			<title>Pavel on "Large-scale-simulation On 8 A100 GPUS"</title>
			<link>http://www.k-wave.org/forum/topic/large-scale-simulation-on-8-a100-gpus#post-8928</link>
			<pubDate>Fri, 06 Oct 2023 11:51:32 +0000</pubDate>
			<dc:creator>Pavel</dc:creator>
			<guid isPermaLink="false">8928@http://www.k-wave.org/forum/</guid>
			<description>&#60;p&#62;I am not a computer specialist, but it looks like GPU applications can use the GPU memory plus half of the CPU RAM, and that seems to be the default configuration on my system. For example, with the Threadripper architecture of my motherboard, I can combine half of my 256 GB of RAM with the 40 GB of my 4090 in a rather fast (DDR5) and cost-effective way (cheaper than buying four 3090 units to run in parallel). Maybe the 3DG version of k-Wave allows this kind of CPU-to-GPU memory sharing as well?
&#60;/p&#62;</description>
		</item>
		<item>
			<title>so_dence on "Large-scale-simulation On 8 A100 GPUS"</title>
			<link>http://www.k-wave.org/forum/topic/large-scale-simulation-on-8-a100-gpus#post-8916</link>
			<pubDate>Sat, 23 Sep 2023 14:09:52 +0000</pubDate>
			<dc:creator>so_dence</dc:creator>
			<guid isPermaLink="false">8916@http://www.k-wave.org/forum/</guid>
			<description>&#60;p&#62;@Jiri Jaros&#60;br /&#62;
Hi Jiri,&#60;/p&#62;
&#60;p&#62;First of all, thanks for your reply! I am deeply interested in the multi-GPU version; could you please send me a beta version? I would be deeply grateful!
&#60;/p&#62;</description>
		</item>
		<item>
			<title>jamesjc on "Large-scale-simulation On 8 A100 GPUS"</title>
			<link>http://www.k-wave.org/forum/topic/large-scale-simulation-on-8-a100-gpus#post-8910</link>
			<pubDate>Thu, 14 Sep 2023 17:03:30 +0000</pubDate>
			<dc:creator>jamesjc</dc:creator>
			<guid isPermaLink="false">8910@http://www.k-wave.org/forum/</guid>
			<description>&#60;p&#62;@Jiri Jaros,&#60;/p&#62;
&#60;p&#62;Are you and your team still on track to release a multi-GPU version this month? Do you have an updated timeline?&#60;/p&#62;
&#60;p&#62;We're very keen to use it :)
&#60;/p&#62;</description>
		</item>
		<item>
			<title>quetaijiangchu on "Large-scale-simulation On 8 A100 GPUS"</title>
			<link>http://www.k-wave.org/forum/topic/large-scale-simulation-on-8-a100-gpus#post-8834</link>
			<pubDate>Wed, 14 Jun 2023 09:52:03 +0000</pubDate>
			<dc:creator>quetaijiangchu</dc:creator>
			<guid isPermaLink="false">8834@http://www.k-wave.org/forum/</guid>
			<description>&#60;p&#62;Really looking forward to the multi-GPU version
&#60;/p&#62;</description>
		</item>
		<item>
			<title>Jiri Jaros on "Large-scale-simulation On 8 A100 GPUS"</title>
			<link>http://www.k-wave.org/forum/topic/large-scale-simulation-on-8-a100-gpus#post-8791</link>
			<pubDate>Tue, 06 Jun 2023 12:49:31 +0000</pubDate>
			<dc:creator>Jiri Jaros</dc:creator>
			<guid isPermaLink="false">8791@http://www.k-wave.org/forum/</guid>
			<description>&#60;p&#62;k-Wave currently runs only on a single GPU; however, the multi-GPU version is almost finished and will be released in September 2023.&#60;/p&#62;
&#60;p&#62;In any case, 1792 * 1792 * 1792 is far too big: it would consume about 650 GB of GPU memory. With 8 A100 GPUs (40 GB of memory each), our alpha code was able to run grids up to 1280^3.
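As a back-of-the-envelope sketch of where those numbers come from (the factor of roughly 30 single-precision full-grid arrays is an assumption chosen here to reproduce the quoted figure, not the exact number of fields the code allocates):

```python
# Rough GPU memory estimate for a 3D k-space simulation grid.
# Assumption: ~30 full-grid float32 arrays (field variables plus FFT
# workspace); this is a ballpark factor, not the exact internal count.

def gpu_memory_gib(nx, ny, nz, n_arrays=30, bytes_per_value=4):
    """Estimated device memory (GiB) for n_arrays full-grid float32 fields."""
    return nx * ny * nz * n_arrays * bytes_per_value / 2**30

print(round(gpu_memory_gib(1792, 1792, 1792)))  # ~643 GiB: exceeds 8 x 40 GB
print(round(gpu_memory_gib(1280, 1280, 1280)))  # ~234 GiB: fits in 320 GB total
```

With that assumption, 1792^3 works out to roughly 650 GB, while 1280^3 needs about 234 GiB and fits across eight 40 GB cards.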
&#60;/p&#62;</description>
		</item>
		<item>
			<title>so_dence on "Large-scale-simulation On 8 A100 GPUS"</title>
			<link>http://www.k-wave.org/forum/topic/large-scale-simulation-on-8-a100-gpus#post-8723</link>
			<pubDate>Wed, 08 Mar 2023 15:59:52 +0000</pubDate>
			<dc:creator>so_dence</dc:creator>
			<guid isPermaLink="false">8723@http://www.k-wave.org/forum/</guid>
			<description>&#60;p&#62;Zhao Xiang, you can get the re-compiled CUDA executable at this address: https://github.com/InesConceicao/Binary-k-Wave.git. Note that it is a Linux version.&#60;/p&#62;
&#60;p&#62;By the way, could we communicate by personal email? My email is &#60;a href=&#34;mailto:hexu92026@gmail.com&#34;&#62;hexu92026@gmail.com&#60;/a&#62;.&#60;/p&#62;
&#60;p&#62;I hope to talk to you soon.
&#60;/p&#62;</description>
		</item>
		<item>
			<title>zhaoxiang on "Large-scale-simulation On 8 A100 GPUS"</title>
			<link>http://www.k-wave.org/forum/topic/large-scale-simulation-on-8-a100-gpus#post-8711</link>
			<pubDate>Tue, 07 Mar 2023 03:02:01 +0000</pubDate>
			<dc:creator>zhaoxiang</dc:creator>
			<guid isPermaLink="false">8711@http://www.k-wave.org/forum/</guid>
			<description>&#60;p&#62;Hi, I have the same problem. Could you please tell me the address of the re-compiled C++ CUDA executable on GitHub? Thanks a lot in advance!
&#60;/p&#62;</description>
		</item>
		<item>
			<title>so_dence on "Large-scale-simulation On 8 A100 GPUS"</title>
			<link>http://www.k-wave.org/forum/topic/large-scale-simulation-on-8-a100-gpus#post-8709</link>
			<pubDate>Mon, 27 Feb 2023 08:15:42 +0000</pubDate>
			<dc:creator>so_dence</dc:creator>
			<guid isPermaLink="false">8709@http://www.k-wave.org/forum/</guid>
			<description>&#60;p&#62;Hi, the k-Wave toolbox is very nice.&#60;/p&#62;
&#60;p&#62;But I have recently run into a problem with a very large-scale simulation.&#60;/p&#62;
&#60;p&#62;First, when using kspaceFirstOrder3DG I can get sensor_data with a k-space grid of 128 * 128 * 128. However, I get the error &#34;Not enough memory to run the simulation&#34; with a k-space grid of 1792 * 1792 * 1792. I have 8 GPUs on my computer with 40 GB of memory each, and I think k-Wave uses only one GPU, leaving the other 7 idle. Is there any way to use all 320 GB of memory across the 8 GPUs for a single k-Wave simulation?&#60;/p&#62;
&#60;p&#62;Second, I can run my own k-Wave code correctly with kspaceFirstOrder3DG on Windows. However, it produces errors when running on Linux; I guess the k-Wave 1.4 toolbox with the C++ 1.3 code has not been compiled for the A100 yet. I failed to re-compile the C++ code myself, but luckily I found a re-compiled C++ CUDA executable on GitHub, and with it the simulation runs correctly. I hope the C++ code will get support for newer GPUs.&#60;/p&#62;
&#60;p&#62;I would really appreciate any help with the first problem!
&#60;/p&#62;</description>
		</item>

	</channel>
</rss>
