The first multibeam echosounders were developed in the 1970s by the U.S. Navy, with scientific institutions adapting the technology soon after. Multibeam surveying is generally used to map the ocean floor in swaths in order to reveal surface structures such as sediment ridges and shipwrecks.
One of the reasons why multibeam measurements are a relatively recent development is that their principle of operation relies heavily on precise measurements of various quantities and on substantial computing power. In a multibeam measurement, an acoustic pulse is sent out to the ocean floor, where it is reflected back towards the ship and received by the multibeam echosounder. From the travel times, the sea floor heights along the swath are computed. To do this accurately, the computer needs to correct for the ship's motion, which is usually described by the components heave, pitch, roll, yaw, and heading. The exact values are measured by a motion sensor and transmitted to the computer in real time, where they are used to compensate for the offset.
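As a simplified illustration of this computation (not the actual echosounder software), the following sketch converts a two-way travel time and a beam angle into a depth and across-track distance, with a crude roll and heave correction; the function name, the straight-ray assumption, and the way roll is applied are all simplifications introduced here.

```python
import math

def beam_depth(twt_s, beam_angle_deg, sound_speed_ms, roll_deg=0.0, heave_m=0.0):
    """Depth below the transducer for one beam (hypothetical helper).

    twt_s          two-way travel time in seconds
    beam_angle_deg beam angle measured from the vertical
    roll_deg       vessel roll, simply added to the beam angle here
    heave_m        vertical displacement of the ship, subtracted at the end
    """
    slant_range = sound_speed_ms * twt_s / 2.0       # one-way path length
    angle = math.radians(beam_angle_deg + roll_deg)  # crude attitude correction
    depth = slant_range * math.cos(angle)            # vertical component
    across_track = slant_range * math.sin(angle)     # horizontal offset
    return depth - heave_m, across_track

# A nadir beam (0 degrees) with a 40 ms two-way time at 1500 m/s
# corresponds to a depth of 30 m.
```

A real system additionally traces the ray through the sound velocity profile, since the beam path bends with changing sound speed; the straight-ray version above only shows the basic geometry.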
Another important factor in the precision of the results is the underwater sound velocity:
Since various properties of sea water (mainly temperature and salinity) affect the sound velocity, measurements made using the standard sound velocity of 1500 m/s would distort the results; the velocity therefore has to be measured continuously using other instruments so that the travel times can be corrected. During this cruise, a Sound Velocity Profiler (SVP) was deployed along with the CTD, and the measured values were used as an approximation for all multibeam measurements until the next deployment of the SVP. This approach was feasible here because the sea water properties in the area around Helgoland are fairly homogeneous.
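A quick numerical check shows the size of this effect; the numbers below are illustrative, not taken from the cruise data.

```python
def depth_from_twt(twt_s, c_ms):
    """Nadir depth from a two-way travel time and an assumed sound velocity."""
    return c_ms * twt_s / 2.0

# Echo from a true depth of 30 m in water where sound actually
# travels at 1480 m/s:
twt = 2.0 * 30.0 / 1480.0

# Computing the depth with the standard 1500 m/s instead
# overestimates the depth by roughly 0.4 m (about 1.4 %).
biased = depth_from_twt(twt, 1500.0)
error = biased - 30.0
```

The relative depth error equals the relative sound velocity error, so even a 20 m/s mismatch matters when small features such as ridges are of interest.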
The beams fan out laterally from the ship, so that data is collected along a swath (hence the name) perpendicular to the ship's heading. To obtain meaningful images of the sea floor, the ship has to go back and forth in slightly shifted, parallel lines whose outer edges overlap, so that the entire ocean floor topography is covered without leaving any gaps in between.
The most important factor for the resolution is the angle relative to the ship at which the acoustic waves are initially transmitted, the so-called "opening angle". The wider it is, the larger the area that can be covered (i.e. the bigger the swath). However, as is often the case in science, this bigger surface coverage comes at the expense of resolution. Therefore, one always has to consider the types of features of interest in order to determine the optimum angle: if it is too large, the results might be useless, making the cruise a waste of time and money; if it is too small, the resulting image might look nicer, but the cruise is unnecessarily prolonged, again wasting time and money. So it is in the interest of both scientific progress and economic feasibility to choose the opening angle wisely.
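For a flat sea floor, the swath width follows directly from the opening angle and the water depth, and the spacing of the survey lines follows from the desired overlap. The helper names and the overlap fraction below are chosen for illustration.

```python
import math

def swath_width(depth_m, opening_angle_deg):
    """Across-track coverage on a flat sea floor: 2 * d * tan(angle / 2)."""
    return 2.0 * depth_m * math.tan(math.radians(opening_angle_deg / 2.0))

def line_spacing(depth_m, opening_angle_deg, overlap_frac=0.2):
    """Distance between adjacent survey lines for a given swath overlap."""
    return swath_width(depth_m, opening_angle_deg) * (1.0 - overlap_frac)

# In 30 m of water, a 90 degree opening angle covers a 60 m wide swath;
# with 25 % overlap, the lines would be spaced 45 m apart.
```

This also makes the trade-off explicit: doubling the tangent of the half-angle doubles the coverage per line, but the same number of beams is then spread over a wider strip, lowering the resolution.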
Once the data is collected, it has to be processed using the program MB-System. To achieve this, the raw data first has to be converted to an editable format (in our case from .mb58 to .mb59) and listed in a datalist, a text file containing the names and format identifiers of the files to be processed. Then, ancillary files have to be created for those files in order to make working with them faster.
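Assuming the standard MB-System command-line tools, these preparation steps can be sketched roughly as follows; the file names are illustrative, and the exact flags should be checked against the MB-System manual pages.

```shell
# Convert a raw file (format 58) to the editable processing format (59);
# this is repeated for each survey file
mbcopy -F58/59 -I survey_0001.mb58 -O survey_0001.mb59

# The datalist is a plain text file: one "filename format-id" pair per line
echo "survey_0001.mb59 59" >> datalist.mb-1

# Create the ancillary files (.inf, .fbt, .fnv) that speed up later access
mbdatalist -I datalist.mb-1 -N
```

The ancillary files cache statistics, bathymetry, and navigation, which is what makes subsequent editing and gridding noticeably faster.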
The program mbeditviz (part of MB-System) is then used to work on the data. It allows the user to view a 3D map of the data points in the chosen files. Usually, there will be some black/dark points and bands in the otherwise colored data (see figure below).
Figure 1: mbeditviz view (picture taken from “Eight steps to clean gridded data”)2
Those distortions are caused by noise in the data and have to be removed to obtain a clean image with clearly visible features. To do this, the user selects a portion of the map to work on. All the individual data points are then shown in black, and by zooming and rotating the view, one can identify spikes, which are flagged to tell the program to disregard those points when gridding the final image. The program also automatically flags points that do not meet certain quality criteria specified during data acquisition. Flagged points are generally shown in red. Two examples of the data point view are shown below:
Figure 2: Data point view with many pre-flagged data points due to low-quality data
Figure 3: Data point view with fewer spikes during data collection; most red points were flagged by the user
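The automatic flagging described above can be imagined as a simple outlier test against the neighbouring soundings. The following is a deliberately simplified sketch (real systems use more elaborate quality criteria); the function name, window size, and threshold are invented for illustration.

```python
from statistics import median

def flag_spikes(depths, window=5, threshold_m=2.0):
    """Flag soundings that deviate from the local median by more
    than threshold_m metres; returns one boolean flag per sounding."""
    flags = [False] * len(depths)
    half = window // 2
    for i, d in enumerate(depths):
        lo, hi = max(0, i - half), min(len(depths), i + half + 1)
        if abs(d - median(depths[lo:hi])) > threshold_m:
            flags[i] = True   # flagged points are excluded from gridding
    return flags

# A 45 m sounding in an otherwise ~30 m deep stretch gets flagged:
# flag_spikes([30.0, 30.1, 45.0, 30.2, 29.9])
# -> [False, False, True, False, False]
```

Manual flagging in mbeditviz plays the same role for spikes that such an automatic test misses, e.g. coherent noise bands.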
After all “bad” data points have been cleaned out this way, the processed files have to be saved using the mbprocess command. Next, a grid is created using the mbgrid command; here, a variety of parameters, such as the interpolation method, resolution, or color scheme, can be chosen to obtain the desired output. Finally, the data is plotted using the mbm_grdplot command, which outputs the final image. In general, the data processing strongly followed the guidelines given in “Eight steps to clean gridded data”2.
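The processing chain just described can be sketched as the following command sequence; the output names and the processed-file datalist (datalistp.mb-1) are illustrative, and the parameters should be checked against the MB-System documentation.

```shell
# Apply the edit flags saved by mbeditviz, producing processed *p.mb59 files
mbprocess -I datalist.mb-1

# Grid the processed bathymetry: -E sets the grid spacing (here 5 m),
# -F1 selects the Gaussian weighted mean algorithm, -A2 grids topography
mbgrid -I datalistp.mb-1 -O helgoland -E5/5/m! -F1 -A2

# Generate a GMT plotting script for the grid and run it
mbm_grdplot -I helgoland.grd -O helgoland_map
./helgoland_map.cmd
```

mbm_grdplot is a macro that writes a shellscript calling GMT routines, which is why the final map is produced by executing the generated .cmd file.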
The multibeam collected data almost continuously throughout the cruise, covering a large area around Helgoland, which can be seen in the figures below showing all the data collected during the 3 days (and therefore also the ship’s path). All 3 plots used the Minimum Filter interpolation.
Figure 4: Data from day 1 (10.04.2012), 5m grid spacing
Figure 5: Data from day 2 (11.04.2012), 5m grid spacing
Figure 6: Data from day 3 (12.04.2012), 3m grid spacing
The main purpose of the multibeam on this excursion was to obtain data of ridges on the sea floor close to Helgoland. To investigate them, the ship went back and forth in lines as mentioned in the Theoretical background section. There are 3 areas in which this was done: The first one was from 54° 08’ 46” to 54° 08’ 54” North and 7° 58’ to 8° 01’ East, the second one from 54° 09’ to 54° 11’ North and 7° 51’ to 7° 53’ East, and the third one from 54° 10’ 28” to 54° 10’ 36” North and 7° 59’ to 8° 02’ East.
All 3 areas are shown below. They all used the F3 interpolation method and a grid spacing of 1m.
Figure 7: First line (day 1)
Figure 8: Second line (day 2)
Figure 9: Third line (day 3)
The main purpose of the trip was to compare the evolution of sand ridges on the sea floor with data from the last few years, determining whether they stay at a constant position or migrate across the sea floor. However, this discussion is not included in this report, because computing power issues made it virtually impossible to plot larger amounts of data (i.e. more than one file at a time) at a grid spacing of 1 m or finer. Since the ridges are very small and thin features, they could not be investigated properly with the equipment available. This problem is dealt with further in the discussion section.
Four shipwrecks were found during this cruise; unfortunately, due to the aforementioned computer problems, two of these could only be plotted at a low resolution and were thus almost indistinguishable. Therefore, those images were omitted and only the remaining two, which were plotted using the more powerful onboard computer, are shown in this report: one submarine around 1.7 km off the southern coast of Helgoland (coordinates: 54° 09' 32" N, 7° 53' 28" E) and one shipwreck 8.7 km to the East of Helgoland (coordinates: 54° 10' 35" N, 8° 01' 32" E), which was nicknamed “butterfly” due to the sea floor structures around it. Both images were created using the Gaussian Weighted Mean (F1) algorithm for interpolation. The spatial resolution was 20 cm for the submarine and 30 cm for “butterfly”.
Figure 10: Submarine
Figure 11: Butterfly
One of the problems in processing the data is that MB-System is a very memory-intensive program; with the computer used on the first two days of the cruise, working on the data was very cumbersome, with the program frequently crashing due to memory shortages. Therefore, only very little (the two images of shipwrecks) could be done during the trip itself, mostly on the third day of the cruise, when a more powerful computer that allowed for a better resolution and faster loading times was used.
The same memory problem occurred in an even more severe fashion when processing the data after the trip to obtain the overview and line plots found in the results section. Since the only equipment available was a regular office laptop, the memory was again much lower than what would have been required to run MB-System smoothly. Therefore, most of the plots produced have a resolution significantly below the ideal; producing a plot of all 3 days of the cruise at once was not possible at any reasonable grid spacing (5 m or finer). The days therefore had to be regarded individually rather than in a composite overview plot, removing the possibility of correlating results from the same location on different days. Moreover, even those separate daily plots could not be produced at grid spacings finer than 5 m (except the third day, which was plotted at 3 m), since the program crashed otherwise, making it hard to distinguish the ridges from the ambient sea floor even in those smaller maps. The areas chosen for the plots in the Lines/Sea floor ridges subsection are largely based on where ridges could be distinguished in mbeditviz, where a more reasonable grid spacing (1 m) could be used. However, mbeditviz also did not allow more than 3 files to be opened at the same time.
Overall, it was very hard and cumbersome to detect large-scale features such as the ridges in any of the view modes, since only very few files could be examined simultaneously while still maintaining an adequate grid spacing. It is therefore quite possible that the plots provided in this report are not the optimal way to visualize the most important parts of the data collected during the cruise.
In an ideal case (that is, with a sufficiently powerful computer), one would first have cleaned all the spikes from the data using mbeditviz (as was also done here), then created a grid from all the data collected throughout the cruise and plotted an overview map at a relatively high resolution, such as 1 m or even 50 cm. This would then have allowed the use of the -R option of GMT, or rather of the mbm_grdplot function which calls a GMT routine, to obtain images of smaller areas showing features such as the ridges or shipwrecks found within the gridded area.
With the equipment used, however, it was only possible to make a reasonably high-resolution grid of about 5 files at a time, so the areas to be plotted had to be chosen by trial and error, which, as mentioned before, left plenty of room for errors and misjudgements.
Since no real features could be determined using MB-System with the given computing power, two images generated by Professor Rossi were laid on top of each other, with the upper layer (this year's data) displayed in color at 50% transparency and the previous data shown in grey scale.
Figure 12: Ridge data from 2012 (in color, 50% transparent) on top of previous data4
From this image, it can easily be seen that even though the shape of the ridges changes slightly from year to year, their general position and height stays more or less constant. This is also what we expected to see given the results of previous campaigns.
Another important conclusion, though not related to the data evaluation, is that for future cruises it would be advisable to provide a more powerful computer from the start, to avoid struggling with a low-performance machine for the first one and a half days.