Software created in this project

There is a repository on GitHub, linked here, with all the software code created in this project. It includes software for:

- Comparative Pathology Workbench
- Gut Cell Atlas Viewer
- Gut Cell Atlas Renderer
- Gut Cell Atlas Model
- Image Processing Scripts

Videos

We have produced videos explaining the GCA Model Integrator, including the Radiology Integration Pathway.

GCA Model Integrator

The GCA Model Integrator Overview

Transcript: The purpose of this video is to introduce you to the web application which has been developed at Heriot-Watt University and Edinburgh University as part of the Human Gut Cell Atlas project, funded by the Helmsley Trust. It is worth starting with the goals of the project as stated on the About page. The primary abstraction of the gut, representing both the small and large intestines, is that of a tube connecting the stomach to the anus. Location is captured in terms of distance along the centre line of the tube to anatomical landmarks, as measured, for example, by an endoscope during a colonoscopy. Each of the 1D, 2D, and 3D models developed in Edinburgh represents a special context in which data locations can be visualised and queried. A location within a model is defined by the proportional distance along the midline path between the closest proximal and closest distal landmarks. What this application does is use that information to show how datasets can be queried using these 1D, 2D, and 3D models.
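To make the location scheme just described concrete, here is a minimal Python sketch of expressing a midline position as a proportional distance between the closest proximal and closest distal landmarks. The landmark names and distances are illustrative placeholders, not values taken from the GCA model files.

```python
# Landmarks as (name, distance along the midline in mm), ordered proximal
# to distal. These values are made up for illustration.
LANDMARKS = [
    ("ileocaecal junction", 0.0),
    ("hepatic flexure", 250.0),
    ("splenic flexure", 500.0),
    ("rectosigmoid junction", 900.0),
]

def to_gca_location(distance_mm: float) -> tuple[str, str, float]:
    """Return (proximal landmark, distal landmark, proportion) for a midline distance."""
    for (p_name, p_pos), (d_name, d_pos) in zip(LANDMARKS, LANDMARKS[1:]):
        if p_pos <= distance_mm <= d_pos:
            # Proportional distance between the two closest landmarks.
            return p_name, d_name, (distance_mm - p_pos) / (d_pos - p_pos)
    raise ValueError("distance lies outside the modelled midline path")

# 375 mm along the midline is halfway between the two flexures:
print(to_gca_location(375.0))  # ('hepatic flexure', 'splenic flexure', 0.5)
```

One advantage of a landmark-relative representation is that the proportion between landmarks can be mapped between models even when their absolute midline lengths differ.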
1D Viewer Controls and Public Access

Transcript: The purpose of this video is to introduce you to the web application which has been developed at Heriot-Watt University and Edinburgh University as part of the Human Gut Cell Atlas project, funded by the Helmsley Trust. If we go to the menu at the top, under Database you'll see six options. The first three can be accessed without logging in; you have to be logged in to view the latter three. The first option is Submit, which is how we submit information into our database. You can see the gut represented as a 1D model, with the large intestine along the top and the small intestine along the bottom. We can position our region of interest (ROI) by dragging this red box along either of these lines, or by clicking and dragging. We can alter the dimensions of the ROI by going to this icon: here we can change the width, or place the box precisely by entering specific coordinates. Once we're satisfied that's where we want it, we enter information defining who the submitter is and some information about the sample; all fields with an asterisk are required. If you haven't logged in but have submitted something before, starting to type your email address will bring up your previous details. For example, I've entered something before under this name, and it brings back all the information I entered previously. We then select the sample type. Depending on the sample type chosen, other options may appear, for example the sample method and the imaging type; we can make a single selection for the sample method and multiple selections for the imaging type. The patient type is the disease type. If your disease does not appear here, you can click Other and enter the new disease you want to register with us, and whatever you enter there will subsequently appear in this drop-down. For example, we might put in Crohn's disease. We then give the total number of samples related to this submission, here five, and the set these five belong to if we know it, and we can enter some additional notes if we want. When we're ready and happy with that, we can submit.

When we submit the data, we get taken to the browse page. This browse page is also accessible under the Database menu, under Browse, and the entry at the top is the one we've just submitted. We have different columns: typing into the box at the top of a column acts as a filter, and we can sort using the bidirectional arrows. The last column can be expanded to see further information, more specifically about where the sample was from. All the rows in this result set are submissions made without the person logging in; if you're not logged in, you will only see entries made by other users who weren't logged in. We can add or remove columns by using the column filter. That's how we browse through the data.

It's worth noting that the 1D model can also be expanded to show what we call the Zoom Viewer, which zooms in on the ROI and gives further details about it. As we move the ROI, we can see further information over here: the anatomy we're moving through, the Uberon IDs of the anatomical types, and the cell types associated with the region of interest. This will be available any time the 1D model appears on any of the other pages.

Another option accessible without having to log in again uses the 1D model, but demonstrates how we can take the precise positioning in our 1D model and send that information to the HuBMAP registration interface. When we do that, we need to supply some information for HuBMAP, such as a name. When we submit, it shows how our 1D model has transformed our selection, the region of interest, and placed it in the 3D registration interface for HuBMAP. From there, you can move on to further information, or take this information and register it with HuBMAP itself.

If we go back and then go to the login page: you'll see that on each page there is an information icon, and clicking it presents information about the page you're on, so you can read more about how to work with the page and what you're seeing. To log in, you'll have been given a username and a password. Once we've entered them, if they're correct, we should see the project box being populated. If the project box doesn't populate, try tabbing out of the password field; sometimes that will initialise it. We select our project and an institution; I'm associated with two, so I'll select Heriot-Watt, and then I can log in. This takes you to the options page, with further links to what you're able to see, view, and edit on this web app. Since I'm logged in as sysadmin, I have access to everything; people with different roles will see the things associated with their role. You can always return to this page via the Options link up here, which is available as long as you're logged in.
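As a rough illustration of the ROI controls described above (dragging the red box, changing its width, or placing it at specific coordinates), here is a small Python sketch. The class and field names are hypothetical and do not reflect the web app's internals.

```python
from dataclasses import dataclass

@dataclass
class ROI:
    """A region of interest as an interval along the gut midline (hypothetical model)."""
    start_mm: float  # proximal end, distance along the midline
    end_mm: float    # distal end

    @property
    def width_mm(self) -> float:
        return self.end_mm - self.start_mm

    def drag_to(self, centre_mm: float) -> None:
        """Move the box so it is centred on a new midline position."""
        half = self.width_mm / 2
        self.start_mm, self.end_mm = centre_mm - half, centre_mm + half

    def set_width(self, width_mm: float) -> None:
        """Resize the box about its current centre, as via the edit icon."""
        centre = (self.start_mm + self.end_mm) / 2
        self.start_mm, self.end_mm = centre - width_mm / 2, centre + width_mm / 2

roi = ROI(start_mm=300.0, end_mm=340.0)
roi.drag_to(500.0)   # drag along the 1D model
roi.set_width(60.0)  # change the width
print(roi)           # ROI(start_mm=470.0, end_mm=530.0)
```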
The Integrated Query Viewer

Transcript: In this video, we're going to look at the power of the 1D model, enabling us to perform complex queries across multiple datasets and integrate them into one result set. To do this, we must first log in through the login menu. Once we're logged in, we get taken to the options page; you can always return to it via the Options link up here. For now, we're going to go to the Database menu and click on Integrated Search. We are presented with the 1D model, and by default it has already brought back a result set based on the ROI, which you can see here in the large intestine. We can drag the ROI to a different location, or click at a different location, and that will automatically perform a search. You can see that multiple data sources are returned here, for example Heriot-Watt University and the HuBMAP Teichmann Lab, and they are ordered by a Jaccard index. The Jaccard index measures the proportion of overlap between the ROI we select up here and the ROI stored in the database. To pinpoint the location of the data, we can expand each row by clicking the arrow in the far right column. This expands the row and lets us visualise more precise information about the location of that entry in the database; the location information is also reflected in the accompanying graphic. Expanding further rows allows some useful visual comparisons between the locations of different entries. There are also links, shown in red, for some of the data. For example, for the Teichmann Lab, following the link takes us to an external resource where we can drill down and get further information about the particular sample recorded. Going back to our result set, we have two links for HuBMAP: one again takes us to an external information set, where we can drill down and find more details about the dataset; the other shows how it is represented in the HuBMAP registration interface itself. That is the data we've taken from HuBMAP and stored in our database, and we're verifying that it looks the same back in the HuBMAP database. This gives us quite a useful tool for searching precisely on location. We will be building the 2D and 3D models into this interface, so you can query using the 2D and 3D views in addition to the 1D viewer. There will also be an additional video on how to use the filters and on the meaning of the heatmaps which are displayed. But for now, that's all.
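The ranking described above can be illustrated with a one-dimensional Jaccard index: the intersection over the union of two intervals along the midline. This is a sketch of the idea, not the integrator's actual implementation.

```python
def jaccard_1d(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Jaccard index of two (start, end) intervals on the midline."""
    intersection = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - intersection
    return intersection / union if union > 0 else 0.0

query = (470.0, 530.0)                     # the red ROI box
stored = [(450.0, 520.0), (100.0, 160.0)]  # ROIs recorded in the database
# Rank stored entries by overlap with the query, best first:
for roi in sorted(stored, key=lambda r: jaccard_1d(query, r), reverse=True):
    print(roi, round(jaccard_1d(query, roi), 3))  # 0.625 for the first, 0.0 for the second
```

An index of 1.0 means the stored ROI coincides exactly with the query; 0.0 means they do not overlap at all.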
Heatmaps and Filters for the Integrated Viewer

Transcript: We return to the 1D viewer and its result set, but this time we have heatmaps added to the page. I've selected my region of interest here and done a search, and these are the results that come back. If I expand the heatmaps, I can see three rows of information. Be aware that the numbers underneath the lines are positional: they're not the number of samples, they're the positions of the landmarks within the tissue, and the legend for them is in the third row down here. The top line gives us the total number of samples, in this case in the large intestine, and the middle line gives us the number of samples in the query region, the query region being the red ROI box. In addition, if we click on a row in the result set, a black box appears in the second heatmap, and that black box pertains to the dataset on the row we clicked. There's also a pop-up when you mouse over any part of the heatmap, giving additional information about what is present at that point in the region. Once we've done our search, if we come down to the result set, we have filters on the column headers which operate slightly differently from the filters on the previous pages: here we can multi-select the values we want to restrict the result set to. For example, if we select one institute, we are restricted to that set of data; we can add another value to the filter and those entries come back as well; and we can add another university, and if we go to page two, we can see its entries there too. It just gives a bit more flexibility over the filtering. We also have the column filter: there are currently eight columns on display, but other columns are available, and we can choose whether to view or hide them by ticking the checkboxes on and off. That's all for just now; I hope that's helpful.
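One plausible way to compute the two heatmap lines just described is sketched below: the top line counts all samples overlapping each position bin, and the middle line counts only those that also overlap the query ROI. The bin size and sample intervals are illustrative placeholders.

```python
def overlaps(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """True if two (start, end) intervals on the midline overlap."""
    return a[0] < b[1] and b[0] < a[1]

def heatmap_counts(samples, query_roi, length_mm=1000.0, bin_mm=50.0):
    """Per-bin counts: (all samples, samples also overlapping the query ROI)."""
    n_bins = int(length_mm / bin_mm)
    total, in_query = [0] * n_bins, [0] * n_bins
    for sample in samples:
        for i in range(n_bins):
            bin_interval = (i * bin_mm, (i + 1) * bin_mm)
            if overlaps(sample, bin_interval):
                total[i] += 1                # top heatmap line
                if overlaps(sample, query_roi):
                    in_query[i] += 1         # middle heatmap line
    return total, in_query

samples = [(100.0, 220.0), (180.0, 300.0), (650.0, 700.0)]
total, in_query = heatmap_counts(samples, query_roi=(150.0, 250.0))
```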
Histopathology Data and Omero

Transcript: In this video, I'm going to show you the histopathology data that has been collected by Michael Glinka and the team at Edinburgh University. If we go to the Database menu and look at Tissue Analysis, a browse page comes up that looks through all the data Michael has submitted so far. This works similarly to the other data tables we've looked at: we have filtering and sorting on the columns, and we can expand rows to find further information. In the sections column there's a link to show the sections associated with that tissue block, and in the image and dataset columns, any numbers present signify the IDs of the image or the dataset in Omero. I've added these myself for the purposes of this demo, because I have an Omero server running here; if you don't have an Omero server running on the server that the web app is hosted on, these links will not be active. If we click, for example, on this image, it takes us into Omero, and there we can see the image as it exists within Omero. If we want to look at the dataset it's in, that takes us to the whole dataset in the Omero client interface, where we can see the different images associated with that dataset. If we are logged in with the appropriate access and go to Database, Tissue Analysis, we can see editing options showing up in the last two columns on the right. If we look at the section data here, we see that we have Omero IDs for image and dataset; if we click the edit button in the section data and have Omero installed, we'll see not only the metadata for the section but also the Omero data. This is looking straight into the Omero database. We can see the thumbnails of the images that are there, and we also have the ability to annotate: we can annotate with tags, for example tagging this as healthy, and we can add key-value pairs, which may help us in doing refined searches. That's tapping into the Omero database. We can look further into it with the Omero Search (again, this will only work if Omero is installed on the server). This is an example of how we can use the 1D model to query different datasets, one of them being Omero: it goes into the Omero database and searches the tags and annotation values that we looked at previously. If a search at one location brings back nothing from Omero, we can try another; there we go, it has found Omero data there. This is just a demonstrator showing how we can integrate Omero data as well, using the annotations on the images.
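For anyone who wants to reproduce this kind of tag search against their own server, here is a minimal sketch using omero-py's BlitzGateway. The host, credentials, and tag value are placeholders, and, as in the demo, it assumes an Omero server is available.

```python
from omero.gateway import BlitzGateway

# Placeholder credentials and host; an Omero server must be reachable.
conn = BlitzGateway("username", "password", host="localhost", port=4064)
conn.connect()
try:
    # Find tag annotations whose value is "healthy" ...
    for tag in conn.getObjects("TagAnnotation", attributes={"textValue": "healthy"}):
        # ... then list the images annotated with that tag.
        for image in conn.getObjectsByAnnotations("Image", [tag.getId()]):
            print(image.getId(), image.getName())
finally:
    conn.close()
```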
1D, 2D and 3D Models

Transcript: On this page, we can see not only the 1D model, which we've looked at in other videos, but also how it relates to the 2D and 3D models. The 2D model here is an anatomogram, and there are five 3D models: the inflated gut, the non-inflated gut, CT scans from Edinburgh, and mappings to the HuBMAP female and HuBMAP male models. They are linked to the 1D viewer: as we scroll the region of interest, the relevant area moves in the models themselves; see how the disc in this model moves to the appropriate place. It also works the other way around: if you click in a model, it changes the ROI in the 1D model. We can rotate the 3D models, though not the 2D one, and zoom in and out if you have a mouse with a scroll wheel. We also have a little selection box at the side if you just want to see a subset of the models; looking at just one gives a clearer picture, and we can see it moving as we move the ROI along. Not only do these views talk to each other, giving us different perspectives on where we are within the gut, but we can also perform our search function: we can use either the 1D model or the 3D models to select our area of interest, then perform a search and get the results back.

Radiology Integration

Transcript: Welcome to this latest video, in which we're going to demonstrate how a radiologist might upload images into the integrator web app, annotate them with location information, save them to the database, view them as part of an integrated search alongside other datasets, and export a set of selected entries to the CPW, the Comparative Pathology Workbench. To begin, we log in; I'll log in as myself for the purpose of this demonstration. Once we log in, we get taken to the options page; we can always return here via the Options link at the top right. For now, we're going to click on Annotate Radiology Images. This takes us to a page with annotation fields at the top, the 1D viewer with the expanded Zoom view below it, and a 3D model. The 3D model is tied to the 1D model: when we drag the ROI in the 1D model, the location changes appropriately in the 3D model.

If we scroll down to the bottom, we can see the image library for the radiology images. We currently have four images in there, so we're going to upload a new image and annotate it. First of all, we select the image directory we're going to upload to. We have the option of choosing files from our local computer or entering a URL for the image; in this case, we choose a file from the local computer. We can see it present here, and then we click Upload. The image uploads successfully, and if we scroll down we can see our fifth image appearing. Now I'm going to select that one to annotate; if I click on it, an expanded view appears below it. At some future date, we may introduce a drawing facility here, so the original image can be overlaid with a user's subsequent annotations as shapes or text. For now, we just proceed to submit, which takes us back to the top of the page, where we enter the annotation information. Because I've logged in, it has already captured my login data. I'm going to select CT/MRI; any option with an asterisk next to it has to be selected. For the patient type, I don't actually know what it is, so I'm going to guess one of them. For the sex, we have options of unknown, male, and female; I don't know what it is. We have the option to put in some notes and a description of the image: I know the image is an oblique image, which is rather vague, but that's the information I've been given, and the plane of the image is coronal. We then choose the region of interest by sliding this red rectangle; the large intestine is on the top, or, if we click down here, the small intestine. We can change the width by clicking on this icon and making the appropriate changes; let's make it a bit bigger and save. Now, I don't know exactly where this is on the image, as I'm not a radiologist, so I'm just going to select an area of interest. Once I've selected it, I click Submit. You can see that it has been successfully entered into the database and appears alongside the other images in the browser here, as the row at the top. We can sort the rows by clicking the bidirectional arrows, or enter what we want to filter on. If we click on the arrow at the end, the row expands to show the exact location information from the region of interest. Because I'm logged in and it's my image that I uploaded, I'm able to edit, delete, or copy it.
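The entry just submitted bundles the form fields and the selected ROI into one record. As a rough illustration, such a record might look like the following; the field names and the endpoint URL are hypothetical, mirroring the form shown in the video rather than any documented API.

```python
import json
from urllib import request

# Hypothetical annotation record assembled from the submit form.
record = {
    "submitter": "demo.user@example.org",
    "sample_type": "CT/MRI",
    "patient_type": "unknown",   # guessed in the demo
    "sex": "unknown",            # options: unknown, male, female
    "notes": "oblique image",
    "plane": "coronal",
    "roi": {"start_mm": 470.0, "end_mm": 530.0, "intestine": "large"},
}

# Placeholder endpoint; the integrator's real submission route is not documented here.
req = request.Request(
    "https://example.org/gca/api/radiology/annotations",
    data=json.dumps(record).encode(),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req)  # uncomment to actually POST the record
```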
Then we can select the images we want to create a bench for in the Comparative Pathology Workbench. I've selected all these images, and I want to send them to the CPW. I select the bench; in this case it's a new bench I'm going to create, and then I click Export. If I had already created a bench, it would appear below and I could add to it, but here I'm creating a new one, so I select the images and export the selections to the CPW. There we go: they've been imported into the CPW, and they are the five images we've been working with. Within the CPW, I've just got these five images at the moment. If I click here, you can view the collection they came from: these are all the images and all the metadata associated with them. I could create a new collection here if I wanted to, and I may want to add a new collection to a row; by building up your bench that way, you're able to make visual judgement calls on the differences between the images in your view. There's more information on the CPW in the how-to links here.

If I go back to our integrator web app, now that I've entered these into the database, I can perform an integrated search. If I put my area of interest here, I get back one of the radiology images alongside other datasets, with the Jaccard index at the side showing the overlap between the region of interest we have here and the region of interest stored in the database. If we click on the right arrow, it shows exactly what the original region of interest was that we stored in the database, and we can do the same on other rows for other datasets. By opening different rows, we get a visual comparison of the different locations of the areas of interest entered into the database from these different datasets. We've also got some links available here: for the radiology one, clicking this link takes us to a view of just that single entry, which again we can open for more information. There's a lot of work still to be done here; I'm sure we could expand it to be a lot more comprehensive and give back a lot more detail, so hopefully we can move in a direction where this will be a useful tool. Thank you very much.

This article was published on 2024-08-27