ChaoticNeutralCzech 1h ago • 100%
The posts seem to be getting better lately
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*Kebab-chan*](https://tapas.io/episode/1047568) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/2bzuyi.png)

*Unlike photos, upscaling digital art with a well-trained algorithm will likely have little to no undesirable effect. Why? Well, the drawing originated as a series of brush strokes, fill areas, gradients etc. which could be represented in a vector format but are instead rendered on a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.) Suppose I gave you a low-res image of the flag of South Korea* 🇰🇷 *and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess at detail (an assumption that does not hold for photos), you could redraw it with vector shapes that use the same colors, recreate every stroke and arc in the image, and then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process - not adding detail, just trying to represent the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.*
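The sampling-theorem claim above can be checked numerically in one dimension: sample a band-limited signal above its Nyquist rate, then rebuild it with Whittaker-Shannon (sinc) interpolation. This is a minimal sketch with NumPy, not anything from the original post; because the sample window is finite, the reconstruction is only near-exact, which mirrors the caveat that real algorithms need extra margin.

```python
import numpy as np

# Band-limited "drawing": a sum of sinusoids, highest frequency 7 Hz.
def signal(t):
    return np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

fs = 20.0                    # sample rate; Nyquist limit fs/2 = 10 Hz > 7 Hz
n = np.arange(-200, 200)     # sample indices (wide window to limit truncation error)
samples = signal(n / fs)

# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n)
def reconstruct(t):
    return np.sum(samples * np.sinc(fs * t - n))

t_test = np.linspace(-1.0, 1.0, 50)   # evaluation points far from the window edges
recon = np.array([reconstruct(t) for t in t_test])
err = np.max(np.abs(recon - signal(t_test)))
print(f"max reconstruction error: {err:.2e}")  # small; exact only with an infinite window
```

The same idea in 2D is why a clean pixel rendering of vector art, with no feature under 2 pixels, pins down the underlying shapes so well.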
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*Buran*](https://tapas.io/episode/1039944) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/ngcybg.png)
ChaoticNeutralCzech 3d ago • 100%
TenTh also did Crabsquid, but I'm posting the images in order; its turn will come in about 2 weeks.
Here are some results from elsewhere on the internet, mostly by Dino-Rex-Makes. Feel free to feed the links to your posting script and schedule them.
![Lava Larva](https://i.redd.it/528mulzeibt61.png)
![Pengwings](https://i.redd.it/z6qikap6fcv61.png)
![Crabsquid+Ampeel](https://i.redd.it/2fbao0vrkqw61.png)
![Peeper](https://i.redd.it/bvx5p6gkcep61.png)
![Peeper](https://i.redd.it/zma8a9qi8c7c1.jpeg)
![Cuddlefish](https://i.redd.it/wmm5j3p8jrt61.png)
![Sea Monkey](https://i.redd.it/lk8h1z9p4i071.png)
![Crashfish](https://i.redd.it/qa1uf8m8ebx61.png)
![Crashfish](https://i.redd.it/2msqcg358by61.png)
![Warper](https://i.redd.it/95t4d1ihmhx61.png)
![Mesmer](https://i.redd.it/qzjpby36mpy61.png)
![Mesmer](https://i.redd.it/snc59psjm1f01.jpg)
![Yellow Sub-MER-ine](https://i.redd.it/4dddf40c5ww41.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*Longleg*](https://tapas.io/episode/1024920) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/8v60he.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*Seamoth*](https://tapas.io/episode/1018654) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/jjmoc5.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*Pod 153*](https://tapas.io/episode/1011507) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/ozpldw.png)
ChaoticNeutralCzech 5d ago • 100%
You know.
I don't... Is there a disgusting story specific to the flamethrower?
Anyway, Elon Musk's enterprises were never not full of stupid ideas. He wanted to pay for his extensive tunnel network just by selling bricks made from the displaced soil. Did he expect millions of them to go for hundreds of dollars, like the limited-edition Supreme-branded ones? And why were roads ever built on the surface if tunnels were so easy and profitable?
Around this time, he also claimed that he had perfected solar roof tiles, while the demo houses actually featured no functional prototypes. The few units delivered were bad at both purposes. This didn't get nearly as much backlash as it should have, but hyperloop hype was still strong back then.
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*The Boring Girl*](https://tapas.io/episode/1003152) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/en7eh9.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 30*](https://tapas.io/episode/801672) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/2llg1i.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 30*](https://tapas.io/episode/801672) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/c24efn.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 29*](https://tapas.io/episode/794529) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/doq12n.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 29*](https://tapas.io/episode/794529) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/aff9do.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 28*](https://tapas.io/episode/786547) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/nx71at.png)
ChaoticNeutralCzech 1w ago • 100%
This is one of the more realistic body shapes you'll see on !morphmoe@ani.social.
If you want to block all moe communities, they are conveniently listed in the sidebar.
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 28*](https://tapas.io/episode/786547) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/svad0n.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 27*](https://tapas.io/episode/780859) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/ujad6r.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 27*](https://tapas.io/episode/780859) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/tsix87.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 26*](https://tapas.io/episode/769533) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/poegfv.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 26*](https://tapas.io/episode/769533) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/aoeq7u.png)
ChaoticNeutralCzech 2w ago • 100%
In real mirror pics, the phone is always perfectly aligned with the frame (obviously).
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 25*](https://tapas.io/episode/761804) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/sbipi8.png)
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 25*](https://tapas.io/episode/761804) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/a3i5rq.png)

Edit: catbox.moe is down only for me for some reason; it works over VPN.
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 24*](https://tapas.io/episode/725534) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/0uvohn.png)

Edit: catbox.moe is down only for me for some reason; it works over VPN.
Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio
Original post: [*#Humanization 24*](https://tapas.io/episode/725534) on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/20yeub.png)
ChaoticNeutralCzech 3w ago • 100%
Needs more ads plastered at weird spots.
ChaoticNeutralCzech 3w ago • 100%
Actually, shaggy mane (Coprinus comatus) is edible.
ChaoticNeutralCzech 3w ago • 100%
Rare OC on Lemmy. Thanks for this!
ChaoticNeutralCzech 4w ago • 66%
A little voodoo doll version of herself on that spear... Kinky
ChaoticNeutralCzech 1mo ago • 100%
It's the legs, you...
ChaoticNeutralCzech 1mo ago • 100%
Four, actually, and it's still missing two from the product it's supposed to represent (they could be removable though).
ChaoticNeutralCzech 1mo ago • 100%
What is that metal instrument on her back that says "𝔖𝔪𝔬𝔨𝔦𝔫𝔤 𝔎𝔦𝔩𝔩𝔰"?
ChaoticNeutralCzech 1mo ago • 100%
And the hat features a quote from Homer's Iliad:
...Ύπνω και Θανάτω διδυμάοσιν.
"...of Sleep and Death, who are twin brothers." This refers to the fraternal relationship of the respective deities, Hypnos and Thanatos.
ChaoticNeutralCzech 1mo ago • 100%
The ship says
Πάσιν ημίν κατθανείν οφείλεται
This is Greek for "Death is a debt which every one of us must pay", a quote from Euripides' play Alcestis.
ChaoticNeutralCzech 1mo ago • 100%
It is obviously pretending to be a historical artifact, but then it proudly says "QUARTZ", indicating there's probably just a cheap modern movement inside.
The waifu is nice though, I like the thigh clasp.
ChaoticNeutralCzech 1mo ago • 100%
They are hard to separate but when you do, they both become half N and half S. No monopoles allowed!
ChaoticNeutralCzech 2mo ago • 100%
It wouldn't last long, would hurt a lot, and would smell horrible... unless you can fake the fire and lightning effect with fluorescent paint in a UV-lit venue. I don't think LEDs can do this yet if the dress is meant to be comfortable.
ChaoticNeutralCzech 2mo ago • 100%
The Random-tan Studio "Humanization" pics I've been posting follow a pattern. See if you can spot it.