Original Articles
Alreshidi E. 2019, Smart sustainable agriculture (SSA) solution underpinned by internet of things (IoT) and artificial intelligence (AI). Int J Adv Comput Sci Appl 10(5):90-102. doi:10.14569/IJACSA.2019.0100513
Anyatasia F. 2023, Investigating motivation and usage of text-to-image generative AI for creative practitioner. Available via https://helda.helsinki.fi/server/api/core/bitstreams/4edf6adb-2d67-4047-bf81-ea09a9b940f1/content
Bengio Y., Y. LeCun, and G. Hinton 2021, Deep learning for AI. Commun ACM 64(7):58-65. doi:10.1145/3448250
Bird J.J., C.M. Barnes, L.J. Manso, A. Ekárt, and D.R. Faria 2022, Fruit quality and defect image classification with conditional GAN data augmentation. Sci Hortic 293:110684. doi:10.1016/j.scienta.2021.110684
Borji A. 2022, Generated faces in the wild: quantitative comparison of Stable Diffusion, Midjourney and DALL-E 2. arXiv preprint arXiv:2210.00586. doi:10.48550/arXiv.2210.00586
Brewer M.T., L. Lang, K. Fujimura, N. Dujmovic, S. Gray, and E. van der Knaap 2006, Development of a controlled vocabulary and software application to analyze fruit shape variation in tomato and other plant species. Plant Physiol 141:15-25. doi:10.1104/pp.106.077867
Chang A., M. Savva, and C.D. Manning 2014, Learning spatial knowledge for text to 3D scene generation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp 2028-2038. doi:10.3115/v1/D14-1217
Creswell A., T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A.A. Bharath 2018, Generative adversarial networks: An overview. IEEE Signal Process Mag 35(1):53-65. doi:10.1109/MSP.2017.2765202
Dehouche N., and K. Dehouche 2023, What's in a text-to-image prompt? The potential of Stable Diffusion in visual arts education. Heliyon 9(6):e16757. doi:10.1016/j.heliyon.2023.e16757
Derevyanko N., and O. Zalevska 2023, Comparative analysis of neural networks Midjourney, Stable Diffusion, and DALL-E and ways of their implementation in the educational process of students of design specialities. Scientific Bulletin of Mukachevo State University Series "Pedagogy and Psychology" 9(3):36-44. doi:10.52534/msu-pp3.2023.36
Dhariwal P., and A. Nichol 2021, Diffusion models beat GANs on image synthesis. Adv Neural Inf Process Syst 34:8780-8794. doi:10.48550/arXiv.2105.05233
Farooq M., A. Rehman, and M. Pisante 2019, Sustainable agriculture and food security. In Innovations in Sustainable Agriculture, Springer. doi:10.1007/978-3-030-23169-9_1
Feldmann M.J., M.A. Hardigan, R.A. Famula, C.M. López, A. Tabb, G.S. Cole, and S.J. Knapp 2020, Multi-dimensional machine learning approaches for fruit shape phenotyping in strawberry. GigaScience 9:1-17. doi:10.1093/gigascience/giaa030
Fjelland R. 2020, Why general artificial intelligence will not be realized. Humanit Soc Sci Commun 7(1):1-9. doi:10.1057/s41599-020-0494-4
Gehan M.A., N. Fahlgren, A. Abbasi, J.C. Berry, S.T. Callen, L. Chavez, A.N. Doust, M.J. Feldman, K.B. Gilbert, J.G. Hodge, and J.S. Hoyer 2017, PlantCV v2: image analysis software for high-throughput plant phenotyping. PeerJ 5:e4088. doi:10.7717/peerj.4088
Goertzel B., and C. Pennachin (Eds.) 2007, Artificial General Intelligence. Springer Berlin Heidelberg. doi:10.1007/978-3-540-68677-4
Goodfellow I., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio 2020, Generative adversarial networks. Commun ACM 63(11):139-144. doi:10.1145/3422622
He K., X. Zhang, S. Ren, and J. Sun 2016, Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 770-778. doi:10.1109/CVPR.2016.90
Huang H., P.S. Yu, and C. Wang 2018, An introduction to image synthesis with generative adversarial nets. arXiv preprint arXiv:1803.04469. doi:10.48550/arXiv.1803.04469
Huang Z., F. Bianchi, M. Yuksekgonul, T.J. Montine, and J. Zou 2023, A visual-language foundation model for pathology image analysis using medical Twitter. Nat Med 29(9):2307-2316. doi:10.1038/s41591-023-02504-3
Jie P., X. Shan, and J. Chung 2023, Comparative analysis of AI painting using [Midjourney] and [Stable Diffusion] - a case study on character drawing. Int J Adv Culture Technol 11(2):403-408. doi:10.17703/IJACT.2023.11.2.403
Khalifa N.E., M. Loey, and S. Mirjalili 2022, A comprehensive survey of recent trends in deep learning for digital images augmentation. Artif Intell Rev 55(3):2351-2377. doi:10.1007/s10462-021-10066-4
Kim D., D. Joo, and J. Kim 2020, TiVGAN: text to image to video generation with step-by-step evolutionary generator. IEEE Access 8:153113-153122. doi:10.1109/ACCESS.2020.3017881
Kim J.G., I.B. Lee, K.S. Yoon, T.H. Ha, R.W. Kim, U.H. Yeo, and S.Y. Lee 2018, A study on the trends of virtual reality application technology for agricultural education. J Bio-Env Con 27(2):147-157. doi:10.12791/KSBEC.2018.27.2.147
Kwon D.H. 2024, Analysis of prompt elements and use cases in image-generating AI: focusing on Midjourney, Stable Diffusion, Firefly, DALL·E. J Digit Contents Soc 25(2):341-354. doi:10.9728/dcs.2024.25.2.341
LeCun Y., Y. Bengio, and G. Hinton 2015, Deep learning. Nature 521(7553):436-444. doi:10.1038/nature14539
Liu J., Y. Zhou, Y. Li, Y. Li, S. Hong, Q. Li, X. Liu, M. Lu, and X. Wang 2023, Exploring the integration of digital twin and generative AI in agriculture. In 2023 15th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), pp 223-228. doi:10.1109/IHMSC58761.2023.00059
Liu V., and L.B. Chilton 2022, Design guidelines for prompt engineering text-to-image generative models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp 1-23. doi:10.1145/3491102.3501825
Liu Y., K. Zhang, Y. Li, Z. Yan, C. Gao, R. Chen, Z. Yuan, Y. Huang, H. Sun, J. Gao, L. He, and L. Sun 2024, Sora: a review on background, technology, limitations, and opportunities of large vision models. arXiv preprint arXiv:2402.17177. doi:10.48550/arXiv.2402.17177
Lu Y., and S. Young 2020, A survey of public datasets for computer vision tasks in precision agriculture. Comput Electron Agric 178:105760. doi:10.1016/j.compag.2020.105760
Lu Y., D. Chen, E. Olaniyi, and Y. Huang 2022, Generative adversarial networks (GANs) for image augmentation in agriculture: a systematic review. Comput Electron Agric 200:107208. doi:10.1016/j.compag.2022.107208
Müller V.C., and N. Bostrom 2016, Future progress in artificial intelligence: A survey of expert opinion. In Fundamental Issues of Artificial Intelligence, pp 555-572. doi:10.1007/978-3-319-26485-1_33
Oppenlaender J. 2023, A taxonomy of prompt modifiers for text-to-image generation. Behav Inf Technol 1-14. doi:10.1080/0144929X.2023.2286532
Oppenlaender J., R. Linder, and J. Silvennoinen 2023, Prompting AI art: an investigation into the creative skill of prompt engineering. arXiv preprint arXiv:2303.13534. doi:10.48550/arXiv.2303.13534
Or-El R., X. Luo, M. Shan, E. Shechtman, J.J. Park, and I. Kemelmacher-Shlizerman 2022, StyleSDF: high-resolution 3D-consistent image and geometry generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 13503-13513. doi:10.1109/CVPR52688.2022.01314
Pavlichenko N., and D. Ustalov 2023, Best prompts for text-to-image models and how to find them. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp 2067-2071. doi:10.1145/3539618.3592000
Plant R., G. Pettygrove, and W. Reinert 2000, Precision agriculture can increase profits and limit environmental impacts. Calif Agric 54(4):66-71. doi:10.3733/ca.v054n04p66
Poole B., A. Jain, J.T. Barron, and B. Mildenhall 2022, DreamFusion: text-to-3D using 2D diffusion. arXiv preprint arXiv:2209.14988. doi:10.48550/arXiv.2209.14988
Radford A., J.W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever 2021, Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp 8748-8763. PMLR. doi:10.48550/arXiv.2103.00020
Reviriego P., and E. Merino-Gómez 2022, Text to image generation: leaving no language behind. arXiv preprint arXiv:2208.09333. doi:10.48550/arXiv.2208.09333
Rural Development Administration (RDA) 2013, Available via https://www.rda.go.kr:2360/ptoPtoFrmPrmnList.do?prgId=pto_farmprmnptoEntry
Sapkota R., D. Ahmed, and M. Karkee 2024, Creating image datasets in agricultural environments using DALL.E: generative AI-powered large language model. arXiv preprint arXiv:2307.08789. doi:10.48550/arXiv.2307.08789
Shorten C., and T.M. Khoshgoftaar 2019, A survey on image data augmentation for deep learning. J Big Data 6(1):1-48. doi:10.1186/s40537-019-0197-0
Stöckl A. 2023, Evaluating a synthetic image dataset generated with Stable Diffusion. In International Congress on Information and Communication Technology, pp 805-818. Singapore: Springer Nature Singapore. doi:10.48550/arXiv.2211.01777
Vougioukas S.G. 2019, Agricultural robotics. Annu Rev Control Robot Auton Syst 2:365-392. doi:10.1146/annurev-control-053018-023617
Wakchaure M., B.K. Patle, and A.K. Mahindrakar 2023, Application of AI techniques and robotics in agriculture: a review. Artif Intell Life Sci 3:100057. doi:10.1016/j.ailsci.2023.100057
Wasielewski A. 2023, "Midjourney can't count": questions of representation and meaning for text-to-image generators. Interdiscip J Image Sci 37(1):71-82. Available via https://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-510407
Wong S.C., A. Gatt, V. Stamatescu, and M.D. McDonnell 2016, Understanding data augmentation for classification: when to warp? In 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pp 1-6. doi:10.1109/DICTA.2016.7797091
Wu J., Y. Wang, T. Xue, X. Sun, B. Freeman, and J. Tenenbaum 2017, MarrNet: 3D shape reconstruction via 2.5D sketches. Adv Neural Inf Process Syst 30. doi:10.48550/arXiv.1711.03129
Yin H., Z. Zhang, and Y. Liu 2023, The exploration of integrating the Midjourney artificial intelligence generated content tool into design systems to direct designers towards future-oriented innovation. Systems 11(12):566. doi:10.3390/systems11120566
Zhai X., A. Kolesnikov, N. Houlsby, and L. Beyer 2022, Scaling vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 12104-12113. doi:10.1109/CVPR52688.2022.01179
- Publisher: The Korean Society for Bio-Environment Control
- Journal Title: Journal of Bio-Environment Control
- Volume: 33
- No: 2
- Pages: 120-128
- Received Date: 2024-04-16
- Revised Date: 2024-04-26
- Accepted Date: 2024-04-29
- DOI: https://doi.org/10.12791/KSBEC.2024.33.2.120