NodeIO problem of use #1621
-
Hi DonMcCurdy, I should start by saying that I am a student working on a small project. Thank you for your monumental work in many areas, from gltf-viewer to this library for transforming glTF/GLB files.
The question that bothers me most consists of two sub-questions. I have read the documentation; as you can see in the example, I am working with `WebIO`, but I noticed that `NodeIO` has additional features that could be used. Bottom line, the problem is that:
and in the Loader method I changed only `KTX2_Loader_func`.
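Roughly, a simplified sketch of the setup (the transcoder path and the `file`/`renderer`/`scene` variables are placeholders, not the exact code):

```js
// Browser side: dropped file -> blob URL, read by both glTF Transform and three.js.
import { WebIO } from '@gltf-transform/core';
import { ALL_EXTENSIONS } from '@gltf-transform/extensions';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { KTX2Loader } from 'three/examples/jsm/loaders/KTX2Loader.js';

const url = URL.createObjectURL(file); // blob: URL created in the browser

// glTF Transform: read the model into a Document for processing.
const io = new WebIO().registerExtensions(ALL_EXTENSIONS);
const gltf_trans = await io.read(url);

// three.js: GLTFLoader with a KTX2 transcoder attached.
const ktx2Loader = new KTX2Loader().setTranscoderPath('/basis/').detectSupport(renderer);
const loader = new GLTFLoader().setKTX2Loader(ktx2Loader);
loader.load(url, (gltf) => scene.add(gltf.scene));
```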
But this option does not work, even though the change in logic is minimal. I know that the blob URL is created in the client browser, so such a URL can't be accepted in Node, but how can I do it the other way around? I read that loaders on the server are very problematic to implement; is that related?

One last question: when dragging and dropping model files, how do I detect that a model 'Requires spec/gloss materials (KHR_materials_pbrSpecularGlossiness), which this viewer cannot display. Materials will be converted to metal/rough.'? I noticed that after `const gltf_trans = await io.read(url)` there is a 'failed to fetch' exception, and that's the only reason I switched to another, normal loader, as you can see in the first code. That's all. I apologise if what I'm doing seems silly or illogical; I'm just trying to understand the logic of your applications and do something for myself.
-
Hi @Oleksandr-Fedorov! I'm not sure I understand the flow of data through your application. Is this what you had in mind?
If so, here are a few comments:
-
That's fine then; let's assume the processing should be server-side for your application.
I would suggest starting with support for just self-contained files like `.glb`, and only later adding support for folders or `.zip` archives, so that the basic pipeline works first. Or, even simpler, perhaps start by just trying to send a simple typed array (not a JSON array) like `new Uint8Array([1, 2, 3, 4])` from the webpage to…
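A minimal sketch of that first step, assuming a Node server using Express and an endpoint path of `/process` (both are assumptions for the example, not something from this thread):

```js
// Browser: send raw bytes (a typed array, not JSON) to the server.
const bytes = new Uint8Array([1, 2, 3, 4]); // later, the real .glb file contents
await fetch('/process', {
  method: 'POST',
  headers: { 'Content-Type': 'application/octet-stream' },
  body: bytes,
});
```

```js
// Server (Node + Express, assumed): receive the bytes and parse them with NodeIO.
import express from 'express';
import { NodeIO } from '@gltf-transform/core';
import { ALL_EXTENSIONS } from '@gltf-transform/extensions';

const app = express();
const io = new NodeIO().registerExtensions(ALL_EXTENSIONS);

app.post('/process', express.raw({ type: 'application/octet-stream', limit: '100mb' }), async (req, res) => {
  const document = await io.readBinary(new Uint8Array(req.body)); // self-contained .glb only
  // ...apply glTF Transform functions to `document` here...
  const glb = await io.writeBinary(document);
  res.type('model/gltf-binary').send(Buffer.from(glb));
});

app.listen(3000);
```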
In a production application I would put large binary resources into object storage (such as Google Cloud Storage or Amazon S3) rather than a relational database, and then store references to those objects in a relational database like Postgres. But for small projects and for learning or prototyping purposes, that's overkill.
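Just for reference, that production pattern looks roughly like the sketch below; the bucket name, table name, and column names are made up for the example:

```js
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import pg from 'pg';

const s3 = new S3Client({}); // region/credentials come from the environment
const db = new pg.Pool();    // connection settings come from PG* environment variables

// Store the binary in object storage; keep only a reference to it in the database.
async function saveModel(id, glbBytes) {
  const key = `models/${id}.glb`;
  await s3.send(new PutObjectCommand({
    Bucket: 'my-models-bucket', // assumed bucket name
    Key: key,
    Body: glbBytes,
    ContentType: 'model/gltf-binary',
  }));
  await db.query('INSERT INTO models (id, storage_key) VALUES ($1, $2)', [id, key]);
}
```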
I'm not sure I followed this question entirely, but the 'Document' is an in-memory representation of a glTF file, used for processing/editing by glTF Transform. This representation can be converted to/from an actual glTF file using the I/O classes. When you parse/load the model in THREE.GLTFLoader, you're parsing the glTF file and creating a three.js scene graph from it. These three types (glTF Transform Document, glTF file, and three.js scene graph) are distinct: you can't apply glTF Transform optimizations directly to a three.js scene graph, for example. Only the glTF file (not the in-memory Document or scene graph) can be transferred over a network.
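To make the three representations concrete, here is a small browser-side sketch; `glbBytes` (the file's contents as a Uint8Array) and `scene` (a three.js scene) are assumed:

```js
import { WebIO } from '@gltf-transform/core';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const io = new WebIO();

// glTF file (bytes) -> glTF Transform Document: in-memory, editable.
const document = await io.readBinary(glbBytes);

// Document -> glTF file (bytes): only these bytes can travel over the network.
const outBytes = await io.writeBinary(document);

// glTF file (bytes) -> three.js scene graph: used for rendering, not for glTF Transform.
new GLTFLoader().parse(outBytes.buffer, '', (gltf) => scene.add(gltf.scene));
```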
Yes, your code example looks good! If you then run the metalRough() transform, the extension will be removed.
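For reference, that conversion is a short transform from the functions package (assuming `document` is the Document read earlier):

```js
import { metalRough } from '@gltf-transform/functions';

// Converts KHR_materials_pbrSpecularGlossiness materials to metal/rough
// and removes the extension from the Document.
await document.transform(metalRough());
```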