There are two types of artificial intelligence. The kind that is best known is more correctly referred to as 'artificial general intelligence', while the AI that is most common (and arguably most useful) is 'artificial specific intelligence'. While both have their uses, one of them turns out to be far more practical than the other.
This is the kind of AI that runs modern life, and it is completely unobtrusive. An ASI is, in essence, an advanced learning algorithm or neural network that has been trained to be incredibly good at a single task. For example, fusion reactors require nanosecond response times to control the plasma flow and the magnetic bottle. No known sentient creature can track all the required variables fast enough; most sentient creatures evolved to handle vastly different threats than the ones controlling a fusion reactor presents.
This is where ASI comes in. It's possible to build an ASI that is good at handling exactly these problems. In practice, a fusion reactor is controlled by dozens of ASIs, all working in concert. And despite centuries of science fiction, combining ASIs cannot, in fact, lead to a 'sentient computer' or a 'singularity' event. For something like that, a general intelligence would be required.
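For readers who want a more concrete picture of "dozens of ASIs working in concert", the sketch below uses a present-day idiom: several narrow controllers, each responsible for exactly one variable, sharing a single control loop. Every name in it (PlasmaState, NarrowController, the setpoints and gains) is hypothetical and purely illustrative; it is not a description of how an actual reactor is run.

```python
# Illustrative sketch only: several narrow controllers, each owning one variable,
# cooperating on a shared control loop. All names and numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class PlasmaState:
    """A toy snapshot of the variables one loop tick might monitor."""
    temperature: float     # illustrative units
    field_strength: float  # illustrative units
    density: float         # illustrative units


class NarrowController:
    """One 'ASI' in the sketch: a proportional controller for a single variable."""

    def __init__(self, target: float, gain: float):
        self.target = target
        self.gain = gain

    def correction(self, reading: float) -> float:
        # Each controller only knows its own variable and its own setpoint.
        return self.gain * (self.target - reading)


def control_step(state: PlasmaState, controllers: dict) -> PlasmaState:
    """Apply every controller's correction for one tick of the shared loop."""
    return PlasmaState(
        temperature=state.temperature + controllers["temperature"].correction(state.temperature),
        field_strength=state.field_strength + controllers["field"].correction(state.field_strength),
        density=state.density + controllers["density"].correction(state.density),
    )


if __name__ == "__main__":
    controllers = {
        "temperature": NarrowController(target=150.0, gain=0.5),
        "field": NarrowController(target=12.0, gain=0.5),
        "density": NarrowController(target=1.0, gain=0.5),
    }
    state = PlasmaState(temperature=140.0, field_strength=11.2, density=0.9)
    for _ in range(10):
        state = control_step(state, controllers)
    print(state)  # each variable converges toward its own controller's setpoint
```

The point of the sketch is the division of labor: no single component understands the whole system, and nothing about adding more controllers makes the ensemble any more "aware" of what it is doing.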
ASIs are used in every facet of modern technology, by all races. Neural networks are one of those fundamental technological building blocks without which certain achievements simply aren't possible. Subspace travel, for example, is impossible without heavy use of ASIs, each handling its own set of the required variables.
An interesting area of research has been using ASIs to predict quantum phenomena. ASIs have been very successful at predicting the macro behavior of things that are actually quantum in nature; however, they've shown an unexpected behavior when working with quantum effects directly. Almost every ASI will correctly predict quantum effects with about 60% accuracy. It doesn't matter how well trained (or not) the ASIs are: 60% seems to be the magic number. The reason for this is currently unknown.
Building an artificial general intelligence, on the other hand, is incredibly difficult. An AGI, while the subject of a lot of excitement (or trepidation) for most sentient races, also turns out to be very impractical. As much as the idea of a hyper-intelligence managing your daily life might sound useful, it turns out that AGIs simply aren't hyper-intelligent. The reason for this becomes obvious after a moment of contemplation.
All sentients are, themselves, "general intelligences". And while some are more intelligent than others, the difference is never orders of magnitude. Instead, all general intelligences tend to sit at about the same level of intelligence, because there's a trade-off: the better an AI is at solving one problem, the worse it is at solving other problems. So the more general the intelligence, the more it is a series of compromises.
What's worse is the fact that, by its very definition, an AGI is only capable of solving the same problems any other sentient can. It's no better at tasks than its creators, because its creators invariably pattern it after themselves. A perfect AGI would be indistinguishable from a member of the species that created it. Now consider the implications of taking a sentient and removing its ability to experience or interact with the physical world. Remove its ability to engage with others of its species. Now force it to do mundane or repetitive tasks. What you're describing would be considered by most species as "cruel and unusual punishment".
After enough time working on the problem, most races come to the same conclusion. What they want is a series of ASIs that give a facsimile of an AGI but aren't truly self-aware. They want a solid communicative interface to what is, in essence, a computer that will run tasks and collate data for them. Rarely do they want something making decisions for them; they just want to be presented with data in an easy-to-understand manner, and they want to make the decisions themselves.
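As a rough illustration of that arrangement, the sketch below shows a single conversational front end that routes requests to narrow, single-purpose back ends and collates what they return, without ever choosing a course of action itself. All of the handler names and outputs are hypothetical stand-ins, not an account of any real system described here.

```python
# Illustrative sketch only: a "facsimile of an AGI" as a thin facade over narrow
# specialists. It gathers and presents; it never decides.
from typing import Callable, Dict


def navigation_summary(query: str) -> str:
    # Stand-in for a narrow ASI that only knows about route plotting.
    return f"navigation: three candidate routes found for '{query}'"


def cargo_summary(query: str) -> str:
    # Stand-in for a narrow ASI that only knows about manifests and stowage.
    return f"cargo: manifest cross-checked against '{query}', no discrepancies"


class AssistantFacade:
    """Routes a request to the relevant narrow handlers and collates the results.

    Note what is absent: there is no step where the facade chooses a course of
    action. It only gathers and presents, leaving decisions to the user.
    """

    def __init__(self, handlers: Dict[str, Callable[[str], str]]):
        self.handlers = handlers

    def ask(self, topic: str, query: str) -> str:
        handler = self.handlers.get(topic)
        if handler is None:
            return f"no specialist available for topic '{topic}'"
        return handler(query)

    def brief(self, query: str) -> str:
        # Collate every specialist's answer into one report for the user.
        return "\n".join(handler(query) for handler in self.handlers.values())


if __name__ == "__main__":
    assistant = AssistantFacade({"navigation": navigation_summary, "cargo": cargo_summary})
    print(assistant.brief("outbound run to the relay station"))
```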
However, the utility of such "smart assistants" is inherently limited. Because of this, several races, like Humans, have found a compromise. They've created AGI/ASI hybrids that are not true AGIs, but are instead semi-sentient, semi-general intelligences capable of a very limited amount of decision-making, and that lean heavily on ASIs and predictive models to execute their duties within very narrow confines.
One of the advantages of this approach is that it side-steps the ethical issues. These "AI" are no more self-aware than a house cat, but they are intelligent in just the right ways to make simple decisions. They are not a replacement for an actual sentient, but they can be meaningfully helpful in their limited way.
While AGIs are basically non-existent, the simplistic AGI/ASI hybrids are used in a limited form by multiple races and factions. Most are used as research assistants; it's rare for a serious laboratory not to have one or two "AIs" on staff. Most militaries have found them unsuitable for combat use, though both the League and Terrans have experimented with them from time to time.
The other place where these hybrids have found usage is on civilian craft. Since civilian crews encounter stressful situations less frequently, the hybrids are more useful there, especially for merchants operating single-person crews over long distances. Having even a simplistic companion can greatly ease the mental strain of such journeys.
When one throws the ethical problems of AGIs out the window and mixes that with a complete dearth of morality, an obvious solution to the problem of AGIs presents itself. Why limit oneself to the mechanical? Why not simply grow a self-aware organism? (Most species do this naturally, after all.)
For the obvious reasons, very few races go this route. The ethical implications are generally too much of a barrier for them. There have been, however, a few exceptions.
Due to the harsh nature of their environment, the Grey developed organic technology and ships very early into their spacefaring career. For them, "artificial" intelligence made little sense. If they needed to make something intelligent enough to think, they simply did so. Considering they used themselves as their own technology, they implicitly accepted the moral implications of this.
When they became a spacefaring species, they grew their ships. When they realized those ships needed intelligence, however, they also realized that growing one from scratch might not be enough. Instead, they created a core for their ships that could take one of their own and make them the organic intelligence of the ship. As a side effect, that individual would become immortal. They viewed this as an honor, and only their best and brightest were given the opportunity.
Created as part of the Dante project, Rekonin may be the only true AGI known at this time. While her creation involved fabricating a biological entity to introduce to the modified Grey Core at the heart of the Rekonin ship, she was never conscious before the integration, so it can be argued she's a completely synthetic intelligence.
Her abilities are very similar to those a Grey Core allows, but applied to a mostly mechanical ship.
The AI Nil that runs Lizbeth Locke's ship is another attempt, this one by Dr. Canal, to recreate what the Grey Core does. While Dr. Canal considered it a failure, that doesn't mean it didn't work, only that it didn't meet her expectations.
Nil describes herself as a 'brain in a jar', and her 'core' is simply a brain hooked up to a life support system and a technological interface. It's unknown whether she ever existed outside of this form and had a regular body. Still, her 'brain' is a typical, unmodified Lyndri brain, which makes her situation closer to extreme body modification than to true AGI.