Is it possible to create custom Camera projection volume modes?
To clarify: I was talking with a friend, and we started to wonder whether there are camera modes beyond orthographic and perspective. For example, one whose viewing volume is defined not by a cuboid or frustum, but by a curved volume?

There are all kinds of cool effects that could be done with that, especially if the curvature were animated. However, I know that the perspective and orthographic modes are generally just implemented with matrices, so it might take some tinkering.
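
To illustrate the "just matrices" point, here is a minimal Python sketch (using mathutils, which ships with Blender) of the two standard projections as single 4x4 matrices. The helper names are mine, not Blender API. A curved viewing volume cannot be written as any one linear transform like these, which is presumably why no such mode exists:

```python
# Minimal sketch: the two built-in projections are single 4x4 matrices.
# Uses mathutils (bundled with Blender); function names are illustrative.
from math import tan
from mathutils import Matrix, Vector

def perspective_matrix(fov, aspect, near, far):
    """Standard perspective projection: x/y shrink with depth."""
    f = 1.0 / tan(fov / 2.0)
    return Matrix((
        (f / aspect, 0.0, 0.0, 0.0),
        (0.0, f, 0.0, 0.0),
        (0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)),
        (0.0, 0.0, -1.0, 0.0),  # this row produces the divide-by-depth
    ))

def orthographic_matrix(scale, aspect, near, far):
    """Standard orthographic projection: depth does not affect x/y."""
    return Matrix((
        (1.0 / (scale * aspect), 0.0, 0.0, 0.0),
        (0.0, 1.0 / scale, 0.0, 0.0),
        (0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)),
        (0.0, 0.0, 0.0, 1.0),
    ))

# Either mode maps a camera-space point with one matrix multiply:
p = Vector((0.5, 0.25, -3.0, 1.0))
clip = perspective_matrix(1.0, 16 / 9, 0.1, 100.0) @ p
ndc = clip.xyz / clip.w  # perspective divide
```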

Blender is open to add-on development, and open source at that, but I honestly have no idea where to begin. Would it be Python, or could I do it in OSL? Or would I have to actually dive into C for this?

Hopefully this question isn't too broad.

EDIT: I am purely attempting to extend the camera modes, not to simulate them with post-processing (which has a number of additional efficiency drawbacks). This question is about add-on development.

These are the familiar viewing transforms we use on a day-to-day basis, outlined by the volume of rendered space (not the volume of the geometry itself, but the volume of space that gets rendered):

Perspective - objects appear smaller with depth:

Perspective

Panoramic - projected from the center in all directions:

Panoramic

Orthographic - same size regardless of depth:

Orthographic

In all cases, depth runs along a straight path in a specific direction; in Blender, that is typically the camera's local Z. For panoramic cameras, it is the local radius r in spherical coordinates. The point is that depth is always measured along straight lines, in a simplistic manner.
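
To make the "straight line" point concrete, here is a minimal sketch of the per-pixel ray each mode produces; in every case the path is origin + t * direction, and depth is just the scalar t. The function names and conventions are mine, for illustration only:

```python
# Every built-in mode generates, per pixel, a straight parametric ray:
#   p(t) = origin + t * direction      (depth is just t)
# Function names below are illustrative, not Blender API.
from math import pi, sin, cos
from mathutils import Vector

def perspective_ray(x, y, fov_scale):
    # One shared origin; the direction fans out per pixel.
    return Vector((0, 0, 0)), Vector((x * fov_scale, y * fov_scale, -1.0)).normalized()

def orthographic_ray(x, y, scale):
    # The origin shifts per pixel; one shared direction (local -Z).
    return Vector((x * scale, y * scale, 0.0)), Vector((0.0, 0.0, -1.0))

def panoramic_ray(u, v):
    # Equirectangular: one origin, directions cover the sphere;
    # depth is the radial distance r.
    theta, phi = (u - 0.5) * 2.0 * pi, (0.5 - v) * pi
    d = Vector((cos(phi) * sin(theta), sin(phi), -cos(phi) * cos(theta)))
    return Vector((0, 0, 0)), d
```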

My proposal is to allow the construction of cameras that are not confined to a simplistic, straight, one-dimensional depth axis, but instead measure depth along a curve:

[Diagram: a camera volume bent along a curve]

As an example, with a rendering space defined like a curved volume (with a taper curve and a bevel curve), we could still define, in linear time, a path to each rendered pixel in Cycles; but that path would be unconstrained, and could, say, look at an object behind the red demo object. It would also gather only the necessary data, whereas rendering everything with a panoramic camera and operating on that would still take roughly six times as long to render, plus the post-processing, and still wouldn't capture all of the necessary data.
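
To pin down what I mean, here is a hypothetical sketch of what such a camera might compute per pixel, assuming the volume is described by a cubic Bézier spine plus a taper. Every name and interface here is made up for illustration; a real implementation would presumably live in Cycles' kernel-side camera code (C/C++), but the geometry is the same:

```python
# Hypothetical sketch: per-pixel "ray" paths bent along a cubic Bezier spine.
# All names here are illustrative, not a real Blender or Cycles API.
from mathutils import Vector

def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t (De Casteljau)."""
    a, b, c = p0.lerp(p1, t), p1.lerp(p2, t), p2.lerp(p3, t)
    d, e = a.lerp(b, t), b.lerp(c, t)
    return d.lerp(e, t)

def bezier_tangent(p0, p1, p2, p3, t):
    """Derivative of the cubic Bezier (direction of travel at t)."""
    return (3 * (1 - t) ** 2 * (p1 - p0)
            + 6 * (1 - t) * t * (p2 - p1)
            + 3 * t ** 2 * (p3 - p2))

def pixel_path(x, y, spine, taper=lambda t: 1.0, steps=32):
    """Bent 'ray' for pixel (x, y) in [-1, 1]^2: offset the spine
    sideways in its moving frame, scaled by the taper. Depth becomes
    arc length along the curve, not distance along a straight line."""
    p0, p1, p2, p3 = spine
    path = []
    for i in range(steps + 1):
        t = i / steps
        center = bezier(p0, p1, p2, p3, t)
        tangent = bezier_tangent(p0, p1, p2, p3, t).normalized()
        # Crude frame around the tangent; a real kernel would want a
        # stable frame (e.g. parallel transport) to avoid twisting.
        side = tangent.cross(Vector((0.0, 0.0, 1.0))).normalized()
        up = side.cross(tangent)
        path.append(center + (x * side + y * up) * taper(t))
    return path  # march this polyline instead of origin + t*direction
```

Marching such a polyline per pixel is linear in the step count, which is what I mean by "in linear time" above.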
