suyu/src/core/hle/service/nvflinger/nvflinger.cpp
Lioncash 6ac955a0b4 hle/service: Default constructors and destructors in the cpp file where applicable
When a destructor isn't defaulted in a cpp file, it can cause the use
of forward declarations to seemingly fail to compile for non-obvious
reasons. Keeping it in the header also allows the construction/destruction
logic to be inlined everywhere a constructor or destructor is invoked,
which can lead to code bloat. That isn't much of a worry here, given the
services won't be created and destroyed frequently.

The cause of the above mentioned non-obvious errors can be demonstrated
as follows:

------- Demonstrative example, if you know how the described error happens, skip forwards -------

Assume we have the following in the header, which we'll call "thing.h":

#include <memory>

// Forward declaration. For example purposes, assume the definition
// of Object is in some header named "object.h"
class Object;

class Thing {
public:
    // assume no constructors or destructors are specified here,
    // or the constructors/destructors are defined as:
    //
    // Thing() = default;
    // ~Thing() = default;
    //

    // ... Some interface member functions would be defined here

private:
    std::shared_ptr<Object> obj;
};

If this header is included in a cpp file (which we'll call "main.cpp")
that creates or destroys a Thing instance, this will result in a
compilation error. Even though no destructor is specified, one still
needs to be generated by the compiler, and that generated destructor is
*not* trivial (in other words, it does something other than nothing),
because std::shared_ptr's destructor needs to do two things:

1. Decrement the shared reference count of the object being pointed to,
   and if the reference count decrements to zero,

2. Free the Object instance's memory (aka deallocate the memory it's
   pointing to).

And so the compiler generates the code for the destructor doing this inside main.cpp.
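
As a concrete (and purely illustrative) trigger, all main.cpp has to do
is create a Thing. The failure is most reliably reproduced if the member
is a std::unique_ptr, since its default deleter refuses to delete an
incomplete type, whereas std::shared_ptr can sometimes get away with it
because its deleter is type-erased at construction:

#include "thing.h"

int main() {
    // Destroying 'thing' at the end of main() forces the compiler to
    // generate ~Thing() right here, where Object is still incomplete.
    Thing thing;
}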

Now, keep in mind, the Object forward declaration is not a complete type. All it
does is tell the compiler "a type named Object exists" and allows us to
use the name in certain situations to avoid a header dependency. The
compiler needs to generate destruction code for Object, but it doesn't
know *how* to destroy it; a forward declaration tells the compiler
nothing about Object's constructor or destructor. So the compiler will
issue an error in this case, because it's undefined behavior to try and
deallocate (or construct) an incomplete type, and std::shared_ptr and
std::unique_ptr make sure this isn't the case internally.

Now, if we had defaulted the destructor in "thing.cpp", where we also
include "object.h", this would never be an issue, as the destructor
would only have its code generated in one place, and it would be in a
place where the full class definition of Object would be visible to the
compiler.
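
Sketched out (same hypothetical files as above), the fix is to declare
the special members in the header and default them in the cpp file that
can see the complete type:

// thing.h
#include <memory>

class Object;

class Thing {
public:
    Thing();
    ~Thing();   // declared here, defined in thing.cpp

private:
    std::shared_ptr<Object> obj;
};

// thing.cpp
#include "thing.h"
#include "object.h"   // Object is a complete type from here on

Thing::Thing() = default;
Thing::~Thing() = default;   // destruction code is generated only here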

---------------------- End example ----------------------------

Given these service classes are almost certainly going to change in
the future, this defaults the constructors and destructors in the
relevant cpp files to make the construction and destruction of all of
the services consistent and unlikely to run into cases where forward
declarations indirectly cause compilation errors. It also has the
benefit of avoiding the need to rebuild several services if destruction
logic changes, since only the single cpp file would need to be
recompiled.
2018-09-10 23:55:31 -04:00


// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.

#include <algorithm>

#include <boost/optional.hpp>

#include "common/alignment.h"
#include "common/assert.h"
#include "common/logging/log.h"
#include "common/microprofile.h"
#include "common/scope_exit.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/hle/service/nvdrv/devices/nvdisp_disp0.h"
#include "core/hle/service/nvdrv/nvdrv.h"
#include "core/hle/service/nvflinger/buffer_queue.h"
#include "core/hle/service/nvflinger/nvflinger.h"
#include "core/perf_stats.h"
#include "video_core/renderer_base.h"
#include "video_core/video_core.h"

namespace Service::NVFlinger {

constexpr size_t SCREEN_REFRESH_RATE = 60;
constexpr u64 frame_ticks = static_cast<u64>(CoreTiming::BASE_CLOCK_RATE / SCREEN_REFRESH_RATE);

NVFlinger::NVFlinger() {
    // Add the different displays to the list of displays.
    displays.emplace_back(0, "Default");
    displays.emplace_back(1, "External");
    displays.emplace_back(2, "Edid");
    displays.emplace_back(3, "Internal");

    // Schedule the screen composition events
    composition_event =
        CoreTiming::RegisterEvent("ScreenComposition", [this](u64 userdata, int cycles_late) {
            Compose();
            CoreTiming::ScheduleEvent(frame_ticks - cycles_late, composition_event);
        });

    CoreTiming::ScheduleEvent(frame_ticks, composition_event);
}

NVFlinger::~NVFlinger() {
    CoreTiming::UnscheduleEvent(composition_event, 0);
}

void NVFlinger::SetNVDrvInstance(std::shared_ptr<Nvidia::Module> instance) {
    nvdrv = std::move(instance);
}

u64 NVFlinger::OpenDisplay(std::string_view name) {
    LOG_WARNING(Service, "Opening display {}", name);

    // TODO(Subv): Currently we only support the Default display.
    ASSERT(name == "Default");

    auto itr = std::find_if(displays.begin(), displays.end(),
                            [&](const Display& display) { return display.name == name; });

    ASSERT(itr != displays.end());

    return itr->id;
}

u64 NVFlinger::CreateLayer(u64 display_id) {
    auto& display = GetDisplay(display_id);

    ASSERT_MSG(display.layers.empty(), "Only one layer is supported per display at the moment");

    u64 layer_id = next_layer_id++;
    u32 buffer_queue_id = next_buffer_queue_id++;
    auto buffer_queue = std::make_shared<BufferQueue>(buffer_queue_id, layer_id);
    display.layers.emplace_back(layer_id, buffer_queue);
    buffer_queues.emplace_back(std::move(buffer_queue));

    return layer_id;
}

u32 NVFlinger::GetBufferQueueId(u64 display_id, u64 layer_id) {
    const auto& layer = GetLayer(display_id, layer_id);
    return layer.buffer_queue->GetId();
}

Kernel::SharedPtr<Kernel::Event> NVFlinger::GetVsyncEvent(u64 display_id) {
    const auto& display = GetDisplay(display_id);
    return display.vsync_event;
}

std::shared_ptr<BufferQueue> NVFlinger::GetBufferQueue(u32 id) const {
    auto itr = std::find_if(buffer_queues.begin(), buffer_queues.end(),
                            [&](const auto& queue) { return queue->GetId() == id; });

    ASSERT(itr != buffer_queues.end());
    return *itr;
}

Display& NVFlinger::GetDisplay(u64 display_id) {
    auto itr = std::find_if(displays.begin(), displays.end(),
                            [&](const Display& display) { return display.id == display_id; });

    ASSERT(itr != displays.end());
    return *itr;
}

Layer& NVFlinger::GetLayer(u64 display_id, u64 layer_id) {
    auto& display = GetDisplay(display_id);

    auto itr = std::find_if(display.layers.begin(), display.layers.end(),
                            [&](const Layer& layer) { return layer.id == layer_id; });

    ASSERT(itr != display.layers.end());
    return *itr;
}

void NVFlinger::Compose() {
    for (auto& display : displays) {
        // Trigger vsync for this display at the end of drawing
        SCOPE_EXIT({ display.vsync_event->Signal(); });

        // Don't do anything for displays without layers.
        if (display.layers.empty())
            continue;

        // TODO(Subv): Support more than 1 layer.
        ASSERT_MSG(display.layers.size() == 1, "Max 1 layer per display is supported");

        Layer& layer = display.layers[0];
        auto& buffer_queue = layer.buffer_queue;

        // Search for a queued buffer and acquire it
        auto buffer = buffer_queue->AcquireBuffer();

        MicroProfileFlip();

        if (buffer == boost::none) {
            auto& system_instance = Core::System::GetInstance();

            // There was no queued buffer to draw, render previous frame
            system_instance.GetPerfStats().EndGameFrame();
            system_instance.Renderer().SwapBuffers({});
            continue;
        }

        auto& igbp_buffer = buffer->igbp_buffer;

        // Now send the buffer to the GPU for drawing.
        // TODO(Subv): Support more than just disp0. The display device selection is probably based
        // on which display we're drawing (Default, Internal, External, etc)
        auto nvdisp = nvdrv->GetDevice<Nvidia::Devices::nvdisp_disp0>("/dev/nvdisp_disp0");
        ASSERT(nvdisp);

        nvdisp->flip(igbp_buffer.gpu_buffer_id, igbp_buffer.offset, igbp_buffer.format,
                     igbp_buffer.width, igbp_buffer.height, igbp_buffer.stride, buffer->transform,
                     buffer->crop_rect);

        buffer_queue->ReleaseBuffer(buffer->slot);
    }
}

Layer::Layer(u64 id, std::shared_ptr<BufferQueue> queue) : id(id), buffer_queue(std::move(queue)) {}

Layer::~Layer() = default;

Display::Display(u64 id, std::string name) : id(id), name(std::move(name)) {
    auto& kernel = Core::System::GetInstance().Kernel();
    vsync_event = Kernel::Event::Create(kernel, Kernel::ResetType::Pulse, "Display VSync Event");
}

Display::~Display() = default;

} // namespace Service::NVFlinger